Shared posts

19 Jul 16:53

New addition to the Velo Lumino program: The AT Saddlelite taillight mount

by somervillebikes

I’ve been drawn to taillights for many years, which is one of the reasons I started Velo Lumino, on a whim. I wanted a dynamo-driven taillight (among other things) for my bike, with a particular look that did not exist.

Since then, I’ve continued to be fascinated by taillights. One particular application of them that I’ve been admiring for a while is the custom work by Mitch Pryor of MAP Bicycles (sadly defunct, fallen victim to the 2018 California fires) and Brian Chapman of Chapman Cycles in mounting small taillights to the saddle. Both these builders have custom-mounted a tiny B&M taillight (or its more high-brow fraternal twin, the SON Rear Light) to the saddle. I love these lights for their tiny size and light weight. They are also inexpensive, at less than $30 (the more upscale SON Rear Light has the same lens and circuitry, but packaged in a bespoke CNC milled and anodized alloy shell, and is heavier and costs three times as much). These lights, and also the Supernova E3 2 taillight, are made for mounting to rear racks via standard 50mm bolt spacing. I bought the tiny B&M simply because I like the look of them (that sounds silly and a little bit on-the-spectrum, I get it), even though I knew I wouldn’t be using it since my bikes were already well-equipped, taillight-wise. Holding it in my hands and marveling at its diminutive size and near weightlessness got me thinking… this is such a cool light… could I design a widget that would allow one to mount this little gem (or any of the other similar 50mm-spaced tiny taillights) to any bike without custom braze-ons or one-off fabrication?

I started with a couple of 3D printed prototypes. Both were centered around the idea of having standard 50mm-spaced mounting holes for the tiny B&M light (its full name is the B&M Toplight Line Small), which is the Euro-standard for mounting taillights to rear racks.

I quickly centered on having the mount attach to the saddle rails, since they are the most standardized part of any saddle– I wanted this mount to work on any saddle. And I wanted it to be so small that it’s barely noticeable. I wanted it to appear as though it were “one with the light and bike”. My first prototype used two lobes that hook onto the rails, with a set-screw in one lobe to create compression between the rails, to keep the mount secure (since the distance between the rails varies slightly between saddles, and the rails also flex somewhat):

IMG_4408

IMG_4410

I liked this prototype for its small size and for the way it kept the light very close to the rails– the mount itself is barely visible. Plus, depending on the height of the saddle leather, it also could be mounted upside-down, with the light above the rails, to make it look really stealthy:

IMG_4415

IMG_4413

Definitely headed in the right direction. But I wasn’t happy with the set-screw tightening setup. Even though I never experienced any movement while testing this prototype (I rode my bike with it this way for several weeks on some rough roads), if it were yanked hard enough, it could be ripped from the rails. I wanted it to be rock-solid and theft-proof.

My next prototype was a two-piece system that clamps onto the rails, in the same manner that saddle rails clamp to a seat post, but miniaturized. This worked great–SUPER strong! I ended up making some minor tweaks and took the plunge with the CNC mill work.

So I introduce the Velo Lumino AT Saddlelite (get it?) taillight mount:

IMG_4853

IMG_4851

IMG_4848

IMG_4854

IMG_4855

IMG_4852

The mount is hollowed out in the center to save weight and to allow the B&M wire to be routed through the mount– the wire is invisible this way.

IMG_4858

This little mount is rock-solid, lightweight (13g) and unobtrusive. I have added it to the Velo Lumino webstore, at a price of $42. I am ALSO selling the B&M taillights, in your choice of red or clear lenses (both illuminate red), for $26. You can purchase both together for $62, and save $6. What’s really great about this combo is that it’s a tiny yet bright taillight with a standlight capacitor that mounts to any saddle without modification, and the combo weighs only 30g including all mounting bolts– lighter than any fender taillight I know of.

I chose not to offer the mount in a high-polish finish, unlike my other components. In this case, the mount is designed not to be seen, so I didn’t feel a polished finish was necessary, and it just adds cost. So it’s only offered in its raw milled finish, shown in the photo above. It is a smooth, bright finish, and if you wanted, you could buff it up easily with some metal polish, or paint it black. But really, it’s barely visible when mounted!

Of course, if you want a totally integrated lighting solution, this means internal frame wiring– as is the case with any integrated dynamo lighting system. In this case, you will want to drill a small hole in the top of your seat post to route the wire into the frame.

What’s next on the horizon? A variation of the Saddlelite that will mount a Velo Lumino seat tube taillight via a single centered M6 threaded hole. The prototyping is already done:

IMG_4859

IMG_4791

IMG_4794

IMG_4792

The seat tube taillight variant of the Saddlelite will be available late summer / early fall… stay tuned!

 

23 Jun 19:35

How much ‘work’ should my online course be for me and my students?

Dave Cormier, Dave’s Educational Blog, Jun 22, 2020

This is a really good reflection on time and online learning. It begins by moving away from the 'credit hour' as defined by classroom seat time, and looks at how much work (as measured by time) a student is expected to put in, and how much an instructor is expected to put in. There are good bits scattered throughout this article. Like: "keep trying to think about it from the perspective of what a student is actually going TO DO." And: "try to stay focus(ed) on what it is possible to do, not what we ‘need’ to do." And: "The standardization police have been telling us for years that each student must learn the same things. Poppycock." And: "if your TA is being paid for 45 hours, that’s as many as they are supposed to work. If your design means they run out of hours, you are uh… going to have to do the rest of the grading."

23 Jun 19:35

Towards fall 2020 in higher education: what an “in person semester” really means

Bryan Alexander, Jun 22, 2020

The plan for most American universities this fall seems to be to advertise that they will be open for in-person classes, then to shift to online learning when needed. Writes Bryan Alexander, who has been tracking their announcements, "Very few campuses are proclaiming a wholly online semester, because (among other reasons) they fear losing enrollment. Yet it seems like many are quietly planning on the possibility." Is it too soon to say that the sector is planning to bait and switch?

23 Jun 19:34

"Attending" a virtual conference

Erin Conwell, Jun 22, 2020

Are you missing in-person conferences? Perhaps you've forgotten the little details that make them, um, memorable. Classic Twitter thread. "To help simulate the real thing, I'll set out a picked-over tray of mini-muffins, soggy cut fruit, and some weak coffee, and then whisk them away just as he approaches the table." Image: Alan Levine, Conference Chicken.

23 Jun 19:34

Ethics in Geo

by Tom MacWright

This year I was invited to speak at FOSS4G, the premier conference for open source geospatial tech. Then, of course, everything happened and the conference isn’t. It’s a pity – I was really excited for it and very thankful that the organizers were willing to take a chance on me, despite my several years of remove from the industry. I had chosen a topic that’s near to my heart and also might benefit from a little distance. Not being one to leave scraps on the shop floor, here are some of the ideas: on ethics in geospatial software.

The only new market is ads

Part of the optimism of the early 2010s in geo was the sense that we might uncover new markets: that the old markets of oil & gas, real estate, military, and government could be supplemented by something new. Maybe individual farmers would benefit from real-time data, or social apps like Foursquare would be map-first. There had to be money somewhere.

Ten years later, the only new market is advertising. Google Maps became a platform for advertising. Everyone else learned that passive location tracking (‘telemetry’) was a nice complement to their consumer behavioral analysis.

With the exception of high-end niches like hiking maps, consumer geospatial is an ad-supported industry. You can argue about whether ads can be ethical, but there’s little debate that they currently aren’t: targeted advertising is currently as personal and invasive as is legally allowed, and the USA sets almost no limit. All of the data that is collected and ‘anonymized’ by ad-tech is potentially re-identifiable, as I wrote in 2018, and geospatial data is even more so than most forms.

It’s hard to overstate the influence of intelligence & oil in geo

“From 2006-2010, my day job was creating software which would help oil companies find oil more effectively; help federal agencies find "terrorists" more effectively; and help domestic police organizations track "gang members" more effectively.”

I admire Christopher’s openness about this, and have a few minor stories of my own. Here’s one of them.

In 2011, I worked on a specification called MBTiles, which stored image files of parts of a map - map tiles - in an SQLite database, which could conveniently be stored as a single file. Basically the problem we were trying to solve was that FAT filesystems had problems storing thousands of files and that writing thousands of files to a disk or transferring them across the web had really high overhead. So we just wanted one file that we could use without unzipping or unpacking: hence, MBTiles. It was a silly solution to a silly problem and it succeeded because we didn’t overthink it.
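For readers who haven't seen the format, here is a minimal sketch of what an MBTiles file boils down to: an ordinary SQLite database with a `metadata` table and a `tiles` table keyed by zoom level, column, and row (stored in TMS row order). The helper names, the file name, and the placeholder tile bytes below are mine for illustration, not part of the spec or of the original post.

```python
import sqlite3

# Minimal MBTiles-style writer/reader sketch: one SQLite file holding map
# tiles, so thousands of small images travel as a single artifact.

def create_mbtiles(path):
    db = sqlite3.connect(path)
    db.executescript("""
        CREATE TABLE IF NOT EXISTS metadata (name TEXT, value TEXT);
        CREATE TABLE IF NOT EXISTS tiles (
            zoom_level  INTEGER,
            tile_column INTEGER,
            tile_row    INTEGER,
            tile_data   BLOB
        );
        CREATE UNIQUE INDEX IF NOT EXISTS tile_index
            ON tiles (zoom_level, tile_column, tile_row);
    """)
    db.executemany("INSERT INTO metadata (name, value) VALUES (?, ?)",
                   [("name", "example"), ("format", "png")])
    return db

def put_tile(db, z, x, y, png_bytes):
    # MBTiles stores rows in TMS order (y is flipped relative to XYZ URLs).
    tms_y = (2 ** z - 1) - y
    db.execute(
        "INSERT OR REPLACE INTO tiles "
        "(zoom_level, tile_column, tile_row, tile_data) VALUES (?, ?, ?, ?)",
        (z, x, tms_y, sqlite3.Binary(png_bytes)))

def get_tile(db, z, x, y):
    tms_y = (2 ** z - 1) - y
    row = db.execute(
        "SELECT tile_data FROM tiles "
        "WHERE zoom_level=? AND tile_column=? AND tile_row=?",
        (z, x, tms_y)).fetchone()
    return row[0] if row else None

if __name__ == "__main__":
    db = create_mbtiles("example.mbtiles")            # hypothetical file name
    put_tile(db, 0, 0, 0, b"\x89PNG...placeholder")   # placeholder tile bytes
    db.commit()
    print(get_tile(db, 0, 0, 0) is not None)
```

The appeal is exactly what the paragraph above describes: a million tiny PNGs become one file you can copy or serve without unzipping or unpacking anything.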

A little while later, the OGC reached out about standardizing MBTiles, or something like it. It was the latter: they wanted something a lot more complicated. I, at 24 and with a serious disdain for meetings, was included in the meetings. The whole direction of the project, what would become GeoPackage, was really hard to follow: the web apps and open source tools that I could imagine had no need for a lot of the things that were passed down as ‘requirements’ from the people who were footing the bill.

Eventually, there was a proposal for a certain table to be included with some XML data indicating cloud cover, with a PDF linked from some military site. Then folks started referring to imagery from ‘the bird.’ It was around then that I realized we were designing a format for imagery to be sent from drones to soldiers.

A bunch of readers probably called that one. Looking at the list of OGC members, you can see the military industrial complex in full force: we’ve got the Army, Lockheed, the Australian DoD, DHS, the NGA, and a whole lot of awkwardly-named defense contractors. There are plenty of open secrets about satellite companies with ≥ 50% revenue from defense and agencies running private copies of OpenStreetMap technology to manage their data.

Oil & defense, of course, haven’t just been consuming software or taking advantage of our hard work. Our system of projections & datums is largely thanks to oil money - originally the European Petroleum Survey Group - the EPSG in EPSG, and now the International Association of Oil & Gas Producers. Without In-Q-Tel, there’s no Keyhole, so no Google Earth or KML. Without dependable intelligence contracts, who knows if Maxar’s constellation of satellites would exist. Or GPS.

But the fact that these interests were so involved in the modernization of the field and are now the largest and most profitable contracts to win shouldn’t be reason to accept them indefinitely.

Doing things is possible

I think that this is bad: that the military-industrial complex is either straightforwardly bad, or at least not good enough that you should donate your time to them. But we all have been donating our time to them, via open source software.

There are ways to change the status quo. Consider ethos licensing, which has been extremely divisive. But do yourself a favor by reading lawyers instead of commenters: Heather Meeker’s skeptical take and Kyle Mitchell’s more positive one. Ethos licensing is a way of using the power of copyright to do more than just protect your rights or give them away: it’s the idea that you could prevent human rights abusers from using your work via legal means.

It probably won’t work, at least how you would hope. We’re in the age of legal realism, in which power and money matters more than the letter of the law. And you probably shouldn’t call it ‘open source’ because the OSI, the organization that controls the term (somehow?) says that ethos licensing isn’t open source. But using an exotic license would work in the case of large companies which have explicit lists of allowed & banned licenses, like Google.

There’s a lot more

This is the facet of ethics in geo that’s been on my mind recently, alongside the pressing need to recreate from the ground up the American police system. But there is so much more and I don’t want to give some blasé ‘read critical theory’ take here. For starters, there’s the representation problem in which maps only show legal and political boundaries and make it easy to forget recently-murdered native peoples. And the terrifying ease of re-identification of most ‘anonymous’ datasets, something that only a handful of experts know how to resolve.

How about you?

I want to be clear about who this is for. Some folks are working for ethically fishy organizations trying to make them better, and some might be right. Some people need to hold down a steady job because they have a family and a mortgage. Some people just love writing code that will be useful to others.

When I was working in geo, I mainly fit into that last bit: fortunate enough to know I could be making a little more if I risked my ethics but that I didn’t need it. The folks in that situation are the ones I judge the most, who are the most similar to me, the ones who have lots of options and knowingly take the one where they’re making the world worse. Why?

For the rest of us, the folks who work on open source software or for companies that make software, the question gets harder. Without ethos licensing, anyone can use the things I write. And while once I wondered who would benefit from that technology, now I know: journalists, sure, and neat startups, and some commercial software outfits.

But there’s no question of whether parts of the military and the oil industry use open source geospatial software: they do. We can only wonder whether the most reprehensible parts, like Border Patrol, an organization that had one officer or agent arrested per day from 2005 to 2012, are also using it. Our complicity is limited: without intention, sure, but no longer without knowledge. And until we figure out ethos licensing or something similar, there’s no way to stop it.

23 Jun 19:34

The Weekly Status Update

by Richard Millington

A client of mine sends a weekly status update.

It includes three things:

  1. What has been accomplished in the past week.
  2. The focus areas for the coming week.
  3. The lowlights of the week (what hasn’t gone well).

I like the format.

It keeps everyone focused on proactively developing a community instead of reactively responding to it.

23 Jun 19:34

The Ed-Tech Imaginary

I gave this keynote this morning at the ICLS Conference, not in Nashville as originally planned (which is a huge bummer, as I really want some hot chicken).

As I sat down to prepare this talk, I had to go back into my email archives to see what exactly I'd said I'd say. It was October 29, 2019 when I sent the conference organizers my title and abstract. October 29, 2019 — lifetimes ago. Do you remember what happened on October 29? Lt. Col. Alexander Vindman testified in front of Congress about President Trump's phone call with the leader of the Ukraine. Lifetimes ago. I'm guessing there are plenty of you who submitted your conference papers in the "Before Times" and now struggle too to decide if and how you need to rethink and rewrite your whole presentation.

I do still want to talk to you today about "the ed-tech imaginary," and I think some of the abstract I wrote last year still stands:

How do the stories we tell about the history and the future of education (and education technology) shape our beliefs about teaching and learning — the beliefs of educators, as well as those of the general public?

I do wonder, however, if or how much our experiences over the past four months or so have shifted our beliefs about the possibilities of ed-tech — our experiences as teachers and students and parents certainly, but also our experiences simply as participants and observers of the worlds of work-from-home and Zoom-school. Do people still imagine, do people still believe that technology is the silver bullet? I don't know. I do know I hear statements like this a lot: "I can't imagine how we will go back to face-to-face classes in the fall." And at the same time, I hear this: "I can't imagine there won't be football." What do we now imagine for the future?

I'd like to think that it's not just the pandemic that has changed us and changed our expectations of school and of ed-tech. So too, I hope, have the scenes of racist police brutality and the protests that have arisen in response. Black lives matter.

We can say "Black lives matter," but we must also demonstrate through our actions that Black lives matter, and that means we must radically alter many of our institutions and practices, recognizing their inhumanity and carcerality. And that includes, no doubt, ed-tech. How much of ed-tech is, to use Ruha Benjamin's phrase, "the new Jim Code"? How much of ed-tech is designed by those who imagine students as cheats or criminals, as deficient or negligent?

"To see things as they really are," legal scholar Derrick Bell reminds us, "you must imagine them for what they might be."

I write a lot about the powerful narratives that shape the ways in which we think about education and technology. But I won't lie. I tend to be pretty skeptical about exercises in "reimagining school." "Reimagining" is a verb that education reformers are quite fond of. And "reimagining" seems too often to mean simply defunding, privatizing, union-busting, dismantling, outsourcing.

We must recognize that the imagination is political. And if Betsy DeVos is out there "reimagining," then we best be resisting not just dreaming alongside her.

The "ed-tech imaginary," as I have argued elsewhere, is often the foundation of policies. It's certainly the foundation of keynotes and marketing pitches. It includes the stories we invent to explain the necessity of technology, the promises of technology; the stories we use to describe how we got here and where we are headed. Tall tales about "factory-model schools" and so on.

Despite all the talk about our being "data-driven," about the rigors of "learning sciences" and the like, much of the ed-tech imaginary is quite fanciful. Wizard of Oz pay-no-attention-to-the-man-behind-the-curtain kinds of stuff.

This storytelling is, nevertheless, quite powerful rhetorically, emotionally. It's influential internally, within the field of education and education technology. And it's influential externally — that is, in convincing the general public about what the future of teaching and learning might look like, should look like, and making them fear that teaching and learning today are failing in particular ways. This storytelling hopes to set the agenda.

In a talk I gave last year, I called this "ed-tech agitprop" — the shortened name of the Soviet Department for Agitation and Propaganda which was responsible for explaining communist ideology and convincing the people to support the party. This agitprop took a number of forms — posters, press, radio, film, social networks — all in the service of spreading the message of the revolution, in the service of shaping public beliefs, in the service of directing the country towards a particular future. I think we can view the promotion of ed-tech as a similar sort of process — the stories designed to convince us that the future of teaching and learning will be a technological wonder. The "jobs of the future that don't exist yet." The push for everyone to "learn to code."

Arguably, one of the most powerful, most well-known stories of the future of teaching and learning looks like this:

Now, you can talk about the popularity of TED Talks all you want — how the ideas of Sal Khan and Sugata Mitra and Ken Robinson have been spread to change the way people imagine education — but millions more people have watched Keanu Reeves, I promise you. This — The Matrix — has been a much more central part of our ed-tech imaginary than any book or article published by the popular or academic press. (One of the things you might do is consider what other stories you know — movies, books — that have shaped our imaginations when it comes to education.)

The science fiction of The Matrix creeps into presentations that claim to offer science fact. It creeps into promises about instantaneous learning, facilitated by alleged breakthroughs in brain science. It creeps into TED Talks, of course. Take Nicholas Negroponte, for example, the co-founder of the MIT Media Lab who in his 2014 TED Talk predicted that in 30 years time (that is, 24 years from now), you will swallow a pill and "know English," swallow a pill and "know Shakespeare."

What makes these stories appealing or even believable to some people? It's not science. It's "special effects." And The Matrix is, after all, a dystopia. So why would Matrix-style learning be desirable? Maybe that's the wrong question. Perhaps it's not so much that it's desirable, but it's just how our imaginations have been constructed, constricted even. We can't imagine any other ideal but speed and efficiency.

We should ask, what does it mean in these stories -- in both the Wachowskis' and Negroponte's -- to "know"? To know Kung Fu or English or Shakespeare? It seems to me, at least, that knowing and knowledge here are decontextualized, cheapened. This is a hollowed-out epistemology, an epistemic poverty in which human experience and human culture and human bodies are not valued. But this epistemology informs and is informed by the ed-tech imaginary.

"What if, thanks to AI, you could learn Chinese in a weekend?" an ed-tech startup founder once asked me — a provocation that was meant to both condemn the drawbacks of traditional language learning classroom and prompt me, I suppose, to imagine the exciting possibilities of an almost-instanteous fluency in a foreign language. And rather than laugh in his face — which, I confess that I did — and say "that's not possible, dude," the better response would probably have been something like: "What if we addressed some of our long-standing biases about language in this country and stopped stigmatizing people who do not speak English? What if we treated students who speak another language at home as talented, not deficient?" Don't give me an app. Address structural racism. Don't fund startups. Fund public education.

This comic appeared in newspapers nationwide in 1958 — the same year that psychologist B. F. Skinner published his first article in Science on education technology. You can guess which one more Americans read.

Push-button education. Tomorrow's schools will be more crowded; teachers will be correspondingly fewer. Plans for a push-button school have already been proposed by Dr. Simon Ramo, science faculty member at California Institute of Technology. Teaching would be by means of sound movies and mechanical tabulating machines. Pupils would record attendance and answer questions by pushing buttons. Special machines would be "geared" for each individual student so he could advance as rapidly as his abilities warranted. Progress records, also kept by machines, would be periodically reviewed by skilled teachers, and personal help would be available when necessary.

The comic is based on an essay by Simon Ramo titled "The New Technique in Education" in which he describes at some length a world in which students' education is largely automated and teachers are replaced with "learning engineers" — a phrase that has become popular again in certain ed-tech reform circles. This essay and the comic, I'd argue, helped establish an ed-tech imaginary that is familiar to us still today. Push-button education is "personalized learning." Personalized learning is push-button education.

(Ramo is better known in other circles as "the father of the intercontinental ballistic missile," incidentally.)

Another example of the ed-tech imaginary from post-war America: The Jetsons. The Hanna-Barbera cartoon — a depiction of the future of the American dream — appeared on prime-time television during the height of the teaching machine craze in the 1960s. Mrs. Brainmocker, young Elroy Jetson's robot teacher (who, one must presume by her title, was a married robot teacher), appeared in just one episode — the very last one of the show's original run in 1963.

Mrs. Brainmocker was, of course, more sophisticated than the teaching machines that were peddled to schools and to families at the time. The latter couldn't talk. They couldn't roll around the classroom and hand out report cards. Nevertheless, Mrs. Brainmocker’s teaching — her functionality as a teaching machine, that is — is strikingly similar to the devices that were available to the public. Mrs. Brainmocker even looks a bit like Norman Crowder's AutoTutor, a machine released by U.S. Industries in 1960, which had a series of buttons on its front that the student would click on to input her answers and which dispensed a paper read-out from its top containing her score. An updated version of the AutoTutor was displayed at the World's Fair in 1964, one year after The Jetsons episode aired.

Teaching machines and robot teachers were part of the Sixties' cultural imaginary — perhaps that's the problem with so many Boomer ed-reform leaders today. But that imaginary — certainly in the case of The Jetsons — was, upon close inspection, not always particularly radical or transformative. The students at Little Dipper Elementary still sat in desks in rows. The teacher still stood at the front of the class, punishing students who weren't paying attention. (In this case, that would be school bully Kenny Countdown, who Mrs. Brainmocker caught watching the one-millionth episode of The Flintstones on his TV watch.)

Not particularly radical or transformative in terms of pedagogy and yet utterly exclusionary in terms of politics. This ed-tech imaginary is segregated. There are no Black students at the push-button school. There are no Black people in The Jetsons — no Black people living the American dream of the mid-twenty-first century.

To borrow from artist Alisha Wormsley, "there are Black people in the future." Pay attention when an imaginary posits otherwise. To decolonize the curriculum, we must also decolonize the ed-tech imaginary.

There are other stories, other science fictions that have resonated with powerful people in education circles. Mark Zuckerberg gave everyone at Facebook a copy of the Ernest Cline novel Ready Player One, for example, to get them excited about building technology for the future — a book that is really just a string of nostalgic references to Eighties white boy culture. And I always think about that New York Times interview with Sal Khan, where he said that "The science fiction books I like tend to relate to what we're doing at Khan Academy, like Orson Scott Card's 'Ender's Game' series." You mean, online math lectures are like a novel that justifies imperialism and genocide?! Wow.

There are other stories, of course.

The first science fiction novel, published over 200 years ago, was in fact an ed-tech story: Mary Shelley's Frankenstein. While the book is commonly interpreted as a tale of bad science, it is also the story of bad education — something we tend to forget if we only know the story through the 1931 film version. Shelley's novel underscores the dangerous consequences of scientific knowledge, sure, but it also explores how knowledge that is gained surreptitiously or gained without guidance might be disastrous. Victor Frankenstein, stumbling across the alchemists and then having their work dismissed outright by his father, stoking his curiosity so much that a formal (liberal arts?) education can't change his mind. And the creature, abandoned by Frankenstein and thus without care or schooling, learning to speak by watching the De Lacey family, learning to read by watching Safie, "the lovely Arabian," do the same, finding and reading Paradise Lost.

"Remember that I am thy creature," the creature says when he confronts Frankenstein, "I ought to be thy Adam; but I am rather the fallen angel, whom thou drivest from joy for no misdeed. Everywhere I see bliss, from which I alone am irrevocably excluded. I was benevolent and good — misery made me a fiend." Misery and, perhaps, reading Milton.

I've recently finished writing a book, as some of you know, on teaching machines. It's a history of the devices built in the mid-twentieth century — before computers — that psychologists like B. F. Skinner believed could be used to train children (much as he trained pigeons) through operant conditioning. Teaching machines would, in the language of the time, "individualize" education. It's a book about machines, and it's a book about Skinner, and it's a book about the ed-tech imaginary.

B. F. Skinner was, I'd argue, one of the best known public intellectuals of the twentieth century. His name was in the newspapers for his experimental work. His writing was published in academic journals as well as the popular press. He was on television, on the cover of magazines, on bestseller lists.

Incidentally, here's how Ayn Rand described Skinner's infamous 1971 book Beyond Freedom and Dignity, a book in which he argued that freedom was an illusion, a psychological "escape route" that convinced people their behaviors were not controlled or controllable:

"The book itself is like Boris Karloff's embodiment of Frankenstein's monster,” Rand wrote, "a corpse patched with nuts, bolts and screws from the junkyard of philosophy (Pragmatism, Social Darwinism, Positivism, Linguistic Analysis, with some nails by Hume, threads by Russell, and glue by the New York Post). The book's voice, like Karloff's, is an emission of inarticulate, moaning growls — directed at a special enemy: 'Autonomous Man.'"

Note: I only cite Ayn Rand here because of the Frankenstein reference. It's also a reminder that the enemy of your enemy need not be your friend. And it's always worth pointing out how much of the Silicon Valley imaginary — ed-tech or otherwise — is very much a Randian fantasy of libertarianism and personalization.

Part of the argument I make in my book is that much of education technology has been profoundly shaped by Skinner, even though I'd say that most practitioners today would say that they reject his theories; that cognitive science has supplanted behaviorism; and that after Ayn Rand and Noam Chomsky trashed Beyond Freedom and Dignity, no one paid attention to Skinner any more — which is odd considering there are whole academic programs devoted to "behavioral design," bestselling books devoted to the "nudge," and so on.

In 1971, the same year that Skinner's Beyond Freedom and Dignity was published, Stanley Kubrick released his film A Clockwork Orange. And I contend that the movie did much more damage to Skinner's reputation than any negative book review.

To be fair, the film, based on Anthony Burgess's 1962 novel, did not depict operant conditioning. Skinner had always argued that positive behavioral reinforcement was far more effective than aversion therapy — than the fictional "Ludovico Technique" that A Clockwork Orange portrays.

A couple of years after the release of the film, Anthony Burgess wrote an essay (unpublished) reflecting on his novel and the work of Skinner. Burgess made it very clear that he opposed the kinds of conditioning that Skinner advocated — even if, as Skinner insisted, behavioral controls and social engineering could make the world a better place. "It would seem," Burgess concluded, "that enforced conditioning of a mind, however good the social intention, has to be evil." Evil.

Many people who tell the story of ed-tech say that Skinner's teaching machines largely failed because computers came along. But what if what led to the widespread rejection of teaching machines — for a short time, at least — was in part that the "ed-tech imaginary" shifted and we recognized the dystopia, the inhumanity, the carcerality in behavioral engineering and individualization? The imaginary shifted, and politics shifted. A Senate subcommittee investigated behavior modification methods in 1974, for example.

How do we shift the imaginary again? And just as importantly, of course, how do we shift the reality? How do we design and adopt ed-tech that does not harm users?

First, I think, we must recognize that ed-tech does do harm. And then, we must realize that there are alternatives. And there are different stories we can turn to outside those that Silicon Valley and Hollywood have given us for inspiration.

I'll close here with one more story — not a piece of the ed-tech imaginary per se, but one that maybe could be, something that points towards possibility, something that might help us tell stories and enact practices that are less carceral, more liberatory. In her essay "The Carrier Bag Theory of Fiction," science fiction writer Ursula K. Le Guin offers some insight into the (Capital-H) Hero and (Capital-A) Action that has long dominated the stories we have told about Western civilization, its history and its future. This is our mythology. She refers to that famous scene in another Stanley Kubrick film, 2001: A Space Odyssey, in which a bone is used to murder an ape and then gets thrown into the sky where it becomes a space ship. Weapons and Heroes and Action. "I'm not telling that story," she says. Instead of the bone or the spear, she's interested in a different tool from human evolution: the bag — something to carry or store grain in, something to carry a child in, something that sustains the community, something where you put precious items that you will want to take out later and study.

That's the novel, Le Guin says — the novel is the carrier bag theory of fiction. And as the Hero with his pointy sticks tends to look rather silly in that bag, she argues, we've developed different characters instead to fill our novels. But the genre of science fiction, to the contrary, has largely embraced that older Hero narrative.

“If science fiction is the mythology of modern technology, then its myth is tragic. "Technology," or "modern science" (using the words as they are usually used, in an unexamined shorthand standing for the "hard" sciences and high technology founded upon continuous economic growth), is a heroic undertaking, Herculean, Promethean, conceived as triumph, hence ultimately as tragedy. The fiction embodying this myth will be, and has been, triumphant (Man conquers earth, space, aliens, death, the future, etc.) and tragic (apocalypse, holocaust, then or now).”

I'd say that this applies to science and technology as fields not just as fictions. Think of Elon Musk shooting his sports car into space.

As we imagine a different path forward for teaching and learning, perhaps we can devise a carrier bag theory of ed-tech, if you will. Indeed, as I hope I've shown you this morning, so much of the ed-tech imaginary is wrapped up in narratives about the Hero, the Weapon, the Machine, the Behavior, the Action, the Disruption. And it's so striking because education should be a practice of care, not conquest. Knowledge as a bag that sustains a community, not as a cudgel. Imagine that.

23 Jun 19:30

Is Volkswagen the new Nokia?

by Volker Weber

nokia-n97-872907349.jpg

When I read the news about Volkswagen, I find myself thinking of Nokia more and more often. Nokia could do everything: produce phones in huge volumes worldwide in many variants, develop new products, differentiate on price and features, and sell through every carrier in the world. The company had a dominant market share and seemed unsinkable.

What Nokia could not do was software. The N97 was the Finns' answer to the iPhone. And it marked the beginning of the end.

23 Jun 19:30

Extract data from a plot in a flat image file

by Nathan Yau

Maybe you’ve seen a chart and wished you could look at the data yourself. Maybe you want to see it from a different angle. But the underlying dataset is nowhere to be found. The WebPlotDigitizer by Ankit Rohatgi lets you load an image and it will attempt to pull out the dataset. Amazing.

I can’t believe this has been around since 2010, and I’m just now hearing about it. [via @jburnmurdoch]
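For anyone curious what "pulling out the dataset" actually involves, the core of it is axis calibration: you mark a couple of known points on each axis, and every other point is a linear interpolation from pixel coordinates to data coordinates. The sketch below is not WebPlotDigitizer's code, just the underlying arithmetic, assuming linear axes; all pixel coordinates are made up.

```python
# Rough sketch of the axis-calibration arithmetic behind plot digitizing,
# assuming linear axes. All pixel coordinates below are hypothetical.

def make_axis(pixel_a, value_a, pixel_b, value_b):
    """Return a function mapping a pixel coordinate to a data value,
    given two calibration points on one axis."""
    scale = (value_b - value_a) / (pixel_b - pixel_a)
    return lambda pixel: value_a + (pixel - pixel_a) * scale

# Calibration: x = 0 at pixel 50, x = 100 at pixel 450;
#              y = 0 at pixel 400, y = 10 at pixel 40 (screen y grows downward).
px_to_x = make_axis(50, 0.0, 450, 100.0)
px_to_y = make_axis(400, 0.0, 40, 10.0)

# Pixel locations of points picked off the chart image (made up).
picked_pixels = [(130, 328), (250, 220), (370, 112)]

data = [(px_to_x(px), px_to_y(py)) for px, py in picked_pixels]
print(data)  # approximately [(20.0, 2.0), (50.0, 5.0), (80.0, 8.0)]
```

Tools like WebPlotDigitizer layer automatic point detection on top of this, but the calibration step is what turns a flat image back into numbers.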

Tags: datasets, image

23 Jun 19:28

Summer Evening on Seaside Greenway, Southside of False Creek

by Sandy James Planner

CEFCC0D8-A264-4FE3-9C71-D617B5032C91


It was just beautiful weather for a walk on the Seaside Greenway on the southside of False Creek between the Granville and Cambie Street bridges. The City of Vancouver has implemented improvements here to separate people using the walkway from the folks cycling through.  The city has used a strip of concrete to provide bifurcation guidance, and placed benches, lighting and signage to indicate which side is to be used based upon your mode of transportation. This section is north of Millbank Lane.

91143227-4367-4C5C-A597-B726131C872C


The walking section  of the greenway next to the water was extremely busy and at times physical distancing was a challenge. Cycling traffic was fast and light in terms of volume.

729DF740-9B3A-4BE8-916E-B201883E29FF


A report to Council in 2016  identifies this section of seawall as being highly used, but at that time quite narrow. In this approved plan walkers and cyclists were separated, uneven surfaces removed, and the area revamped to make access easier with an All-Ages-and-Abilities target for cycling.

The City’s report states that this section of seawall is a regional as well as a local destination; one study counted 2,000 to 3,600 people walking and 1,800 to 2,500 people riding bicycles over a twelve-hour period. But on a sunny warm evening, would it have worked better if the bollards could be shifted to allow more people room for walking and physical distancing in this pandemic time?

 

0674AF59-0C02-466D-B374-19DDDB2EEFDD

22 Jun 03:38

Recommended on Medium: The App Store Debate: A Story of Ecosystems


This is an annotated twitter thread. Throughout I have added some additional context based on the discussion including: the rationality of ecosystem actors, ecosystem optimization, most favored nation status, and what’s a fair “take.”

Debate/discussion/rants about app stores (or perhaps The App Store) have rapidly polarized to the point where it seems difficult to have a rational discussion. Even trying to discuss is viewed as a defense. A discussion without defending. The situations are similar, really. 1/

I would defend the value app stores bring, especially based on my experience detailed here. I’m not here to defend any specific rule or element of the Apple App Store. That’s for them to do.

2/ Mostly I want to talk about the problems app stores in general solve and how that relates to a rather precedent setting document, the Windows OEM Preinstallation Kit (OPK). OPK set forth the rules to be followed when a PC maker “bought” Windows.

Cover of the Microsoft Windows 95, OEM Preinstallation Kit guide (taken from DOJ case)

3/ Much of the DOJ v Microsoft antitrust case was perceived to be about browsers or even “bundling” but in fact it was really about the terms and conditions that came with selling a Windows PC. The regulation that followed was much more about that.

US v Microsoft DOJ Consent Decree outlining the 6 restrictions around selling Windows.

The case was brought starting with browsers, but also with the idea of Windows and Office coming from the same company (remember split Microsoft), and servers and middleware (Java). There was a lot of stuff. As @benedictevans says, the problems people bring do not always result in what the conventional wisdom says should be regulatory decisions. Regulators are going to regulate and what they see is a whole different thing, sometimes. In this case, they saw the root of the challenge to be how Windows was licensed to PC makers.

4/ Before this, in the old, old days, MS made software and PC makers paid essentially as distributors or “middle men.” They viewed MS as a component provider of BASIC. MS viewed them as a licensee. There was no real retail/consumer market for software.

5/ Over time, as PCs all became IBM compatible PCs and the important software was MS-DOS the OS became more important to the marketplace. Making a DOS-compatible computer was cool. But challenging, as after a short time PCs started to look like indistinguishable commodities.

6/ As software people we always thought this was weird because we saw so many things we wanted from computers (faster, lighter, quieter, more ports, ….). OEMs decided that the best way to differentiate PCs was with software like this Sony. (Serious about making HW, add some SW?)

Example list of software included with a Sony laptop circa 2000 includes over a dozen packages, duplicates, etc.

This is from a Sony Vaio PC circa 2000. Think of how crazy it was to log on to this machine for the first time. You’d be blasted with offers to sign up for AOL, CompuServe, EarthLink, and Prodigy. Then McAfee would kick in with an offer every two weeks. The VAIO support agent would ask you to register. And of course all of those Sony apps were loading monster amounts of code at boot, slowing everything down and draining the battery. All of this in addition to a bunch of code that is there but not listed as “programs” because it simply gets the PC working—all the device drivers, helper apps, and so on. Sony made the best-designed hardware, and they viewed this software as essential to their experience. The conversations with them were always crazy because they thought the software helped to express the hardware, and we would say the hardware is great, can you do less software? They were not being naive, dumb, or silly. They too wanted to make a great PC and just had a different idea. Of course the bounties paid by the online services and antivirus were real and that was part of the business. I use Sony as an example because they had the most differentiated hardware.

7/ As software product people we liked more software but the problem was added software was not thought of as a product — it seemed more like “what can we get cheaply/add to differentiate”. Some of this duplicated, interfered, or was just poor. WiFi selectors. USB drivers. Etc.

8/ Of course some was just to fill in gaps in DOS or to add features to DOS customers wanted. The product side of the OS team just wanted the PC to work. Windows 3.x PCs were starting to become rather “frankenstein” in how they showed up at the store — whole parts replaced.

One of the biggest examples of this turned into another lawsuit as well, and that was MS-DOS disk compression (the Stac lawsuit). This was an example of a third party building a tool and doing a bunch of stuff to MS-DOS / Windows that made the platform “unstable.” Microsoft then did its own version of this, but later. It was nasty. But OEMs wanted to ship this because hard drives were expensive.

9/ With Windows 95 MS began a concerted effort to define a “product” view of what could be installed. In exchange for the OEM price (v retail) PC makers had to follow rules about what could be installed. This was so customers knew something about a “Win 95 PC”.

10/ This seems fair and very focused on customers. Reviews of PCs had started to refer to bloated PCs that booted/ran slow because of all the stuff that was installed. From the product perspective we saw all the poor techniques these apps used.

Sample page from OEM OPK

There are only a few pages of OPK rules. This greatly expanded with each release of Windows. JonDe goes through some of his early experiences on Windows (we joined at the same time) providing the ground truth to OEMs. We dramatically simplified the OPK but we never went far enough. A thread.

One of the big observations in the above thread is how little data the OEMs had about PC performance. Something we began to do is share relative (anonymized) performance data so OEMs could see how they stacked up competitively. This translates into today’s discussion because something the platforms have is a huge amount of data that informs their strategy (what APIs are called, how many apps are purchased, how much each vendor sells, app quality and performance, and so on). Perhaps they could do more to share relative data. My experience was that all the OEMs thought they were doing great and better than all the other OEMs. This is just like we’re seeing today where every app maker is a good actor.

11/ OEMs and regulators saw things differently. They saw this as Microsoft getting in the way of the freedom of a PC. OEMs should be able to do whatever they want; after all, they owned the customer. Certainly the product team thought the end-user was the customer. The business side 🤔

12/ Who was the customer? The OEM or the PC buyer? Ask the person with a broken PC and they almost always thought of it as a Windows problem. They would call Microsoft and, through a complex phone tree, end up with Dell’s phone number, good for 90 days from purchase.

13/ While Microsoft was paid for Windows, it was paid by the OEM. So OEMs reasoned they could do whatever they wanted to Windows, essentially acting as the customer would. But Microsoft viewed the OEM as a distributor and the end-user as the licensee. Really complicated in a legal sense.

14/ But no matter what, our mutual customers thought of this as a Windows problem. That was the product person view of the situation. It was enormously frustrating to see OEM PCs with flakey software or worse “first run” or “out of box” experiences through all those screens.

15/ Over each release of Windows the OPK got more complex because for every rule that was added, OEMs would sort of hack around to do what they wanted anyway. Btw, the OEMs learned they could get paid (not pay) to include software which totally altered their incentive.

16/ By hack I mean do things like run programs after initial setup that would hide Windows features or get around limits on number of programs by running another program to install many others. The PC+OPK became a game board and customers were in the middle pointing to MS.

I really can’t overstate the craziness that OEMs would go through to work around limits. It is sort of like all the ways advertisements try to trick the browser. At each juncture their point was “just get rid of the rule and we won’t need to do this,” so to speak. The problem of course is that it never ends so long as there is a rule.

A key factor to understand in an ecosystem is that the platform is common and therein lies the problem. A common platform means no one has an advantage. The only advantage one vendor has over another is to exploit the platform “better” than the next company. While most observers think that the right way to compete is to build a better version of what you’re building, that is not how companies think of competing in general. They look at competitors who all too often are doing very similar things and then think how they could get a leg up. Maybe it is in how they can hack more usage, get an edge on performance, integrate with some OS feature better, and so on. Today we think of these as “growth hacks.”

This competitive spirit among similar partners in the ecosystem leads to two observations. First, all the partners in a category will gravitate to the same sort of “game play” and that will happen quickly. No matter how high-minded or principled a company might be, if their competitor gets an edge by a certain behavior then they will follow. Before the Windows capability to set the default programs and search engine, we saw the OEMs go through any number of hoops to have their (paid for) browser, search engine, media player, etc. become the default. (Note: the court case(s) ruled on this matter, strongly against Microsoft holding these as OS settings.) We are seeing this sort of behavior today in how all the video conferencing apps are finding ways to insert themselves into the calendaring flow of users, regardless of which calendar a customer is using.

Second, the reality of this behavior is not tied to the good actor/bad actor perspective. In the PC era the actors were the top 4 PC makers—these are iconic companies with decades of business leadership. These are not hacky startups or bot-like companies from remote parts of the world. These companies are the leaders in industry. The reality is this is how business works in a highly competitive environment. Even if a company says today they will not “stoop” to this level of work, it is just a matter of competition (not time or principles) before they have to in order to respond to customers and the market.

17/ Yet this whole time the distribution through OEM remained valuable. It was after all the value MS saw too — why not others? But who would be accountable for a PC? As a platform product customers tended to blame Microsoft. It didn’t help that Microsoft was profitable and big.

18/ Aside: I was on Office. We even got nailed for violating the OPK because of the “Office Shortcut Bar” which was a somewhat loved feature that was deemed to be “an alternate program launcher”. We didn’t have a big OEM business but some. Rules were adjusted.

Office Shortcut Bar

19/ @waltmossberg coined the phrase “craplet” to define this stuff. Conventional wisdom became that the only way to get a “good” PC was to do a “clean” install of Windows. Unfortunately most product people at Microsoft agreed and did that. We lost touch with customers.

Excerpt from review in Wall St Journal where Walt Mossberg used phrase “craplets”

20/ Of course all of this took a dark turn when the rules were viewed as ways to exclude competitive software from the PC. Cynical folks might have thought that the complaints about browsers were just a way to gain an upper hand for more craplets.

So basically what happened was that the idea of replacing parts of Windows was all well and good, though contentious, until the browser was part of Windows. Not everyone agreed that the browser was part of Windows or that that part of Windows should not be replaced.

It is important to note that the idea of “what is part of something” is a never-ending debate in platforms/ecosystems. Today people talk about being Sherlocked by Apple (of course most weren’t around for Sherlock). On the other hand, no matter what part of a platform you look at, there was a time when it was not “free” with the platform: mouse support, graphics for games, USB support, TCP/IP support, and, well, even support for a windowing system. In an open system people are always going to “add” to it (even if there’s a curated store) and sometimes those additions are also worthy of doing platform features. This too is very tricky, on both sides of the discussion.

21/ Things don’t really stop there though and this isn’t about the antitrust case. Much was being made about how much better Macs were than Windows. Fewer viruses, malware, etc. Everyone remembers “Get a Mac” tv ads. I do. So did JimAll running Windows.

Jim Allchin's Mac message: The full text

Get a Mac actors

22/ PCs had long been the targets of viruses, but a new kind of software emerged — malware — thanks to the internet. Now the (crazy new) innocent step of doing a search for some software led to downloading and installing really bad software.

23/ Trying to watch that BitTorrent clip of a movie or trying to make your game frame rate increase or create a free PDF? Search and download then your PC is just a couple of clicks from being pwned, or just forever slow and flakey. Ugh.

These benign searches used to take customers to download sites where they might occasionally get the right tools. Soon though these attacks were two-pronged. First bad people would simply post evil versions of these tools and catch downloaders. Second they would seed the net with content that required a viewer and get people that way. The world is an evil place. Do not ever google an ogg vorbis player.

24/ I know most people reading this would say this does not happen to them or they know better or just give them control and they can prevent it. This is not about techies. It is about normal people who get duped into these problems. Millions of times. Again and again.

25/ The problem isn’t customers or even distribution. The problem is that any platform at scale is going to become something of a hacking surface — a game board to be exploited. That’s really unfortunate but just is. The platform rules (APIs or contracts) form the board. //Cont’d.

26/ Of course most everyone reading this has two reactions to that. First, they don’t think they are susceptible to these tricks and can recover. Second, as a developer they would never exploit a platform this way and only try to solve customer problems.

27/ That gets back to OEMs. These are very customer focused companies looking to build products for their customers. Yet somehow for whatever reasons they end up shipping software that doesn’t always make a platform shine and in some cases even degrades the experience.

It is really important to understand two things about the story I am trying to tell. First, everyone in the entire “system” believes they themselves are acting in good faith and working to build great products. No one is going to “exploit” the platform or do bad things or make the platform look bad. Likewise the platform has no interest in shutting out developers or making things difficult for them. Everything everyone is doing is to make the world a better place.

Second, and this is a big “but,” everyone is also only acting locally. No one is really globally optimizing the customer experience. There’s no global usability test for the system. There is not an optimal solution that everyone is going for by using Goal Seek in Excel. In the end, each company/organization or even team within a big company is acting in their best interest. Yes, this is one giant collection of org charts shipping code.

This means that even though everyone is well intentioned, there is room for the platform to say “well maybe we need to be careful” or “maybe we should just require this”. From my past an example is pushing people for signed device drivers. Seems silly, but no one company saw this as something useful or interesting to do. The platform (OPK) requiring it was a tax on the system. It raised costs and reduced agility. It was even viewed as something to keep people focused on Windows. And all we wanted to do was reduce malware.

28/ Over time the OPK rules became more complex because no matter what, each OEM wanted to have a competitive edge over the other OEM. That is not said to judge in any way. It is simply how a game board works, especially over time.

29/ Many thought there should just be no rules. We tried a world of no rules and PCs essentially became unsafe/unusable. We routinely tested battery life or boot time of an OEM install v clean PC and the clean PC trounced the OEM PC. Customers still thought “Windows PC”.

30/ When Smartphones came along with app stores it was abundantly clear to me (and many others) this was a solution to having a positive customer experience and better maintaining that experience over time. Stores curating apps to weed out exploitive use cases were a win.

31/ Even better, it solved an age-old problem of how customers can even find software reliably (versus a search engine). In other words, the stores solved for quality and distribution at the same time. Distribution costs money, lots. Now the platform produced customer leads.

32/ Here we are today though having perhaps recreated the OEM problem except instead of hardware makers there are many software developers who look at the rules, look at what they want to do, and decide they have a better way. Like OEMs though…

33/ the question really is whether the many parties that come together to deliver the customer experience are aligned. Can they be aligned? Clearly no one wants to go back to a free-for-all (no matter how many techies say bad things don’t happen to them).

34/ All the good that stores bring is all of a sudden being looked at through a lens of trying to accomplish things that may or may not be the case. Many times through the DOJ trial the question turned to intent and whether it was pure or not.

35/ Some specific instances were proven to be, at least in the eyes of important actors, exclusionary. But most everything in the OEM OPK was rooted in trying to keep a Windows PC experience from being really crappy. My Office shortcut bar was kind of crappy on W95.

36/ The question is really about customer experience and maintaining a value prop. Were the major PC makers bad actors? Microsoft didn’t coin the phrase craplets. Well intentioned people can do things that muck stuff up for many.

37/ Obviously no one reading this is a bad actor. Obviously platform makers have no vested interest in doing things that make the platform “hostile” to more developers. Everyone says they are doing right by customers from their perspective. Yet here we are (or were).

38/ Today we’re hearing all the examples of good actors who say they want to do good things. We’re not hearing from bad actors who would have destroyed the smartphone experience — maybe even prevented the app revolution from happening.

39/ Imagine a world of apps that simply exist to steal your credentials or swipe information from a device. Imagine apps that interfere with other apps or simply reduce battery life. Imagine an app store filled with apps that simply render web sites.

40/ I know some will take each rule and say it isn’t needed or goes too far, like OEMs with the OPK. My point isn’t to defend the status quo. It is to point out that this is very difficult and there are huge benefits. The discussion is nuanced, at least given what I experienced. // END

In the discussion on Twitter, two comments came up a couple of times that I wanted to offer a thought on.

One is: why not do something about the “30%”? Well, this thread was not about Apple and I don’t really have an opinion. But what strikes me is how much that 30% adds up to. @Asymco modeled some numbers http://www.asymco.com/2020/06/20/subscribe-again/ and concluded that the app store accounted for about 3.8% of the total base of revenue for services. Interestingly, that’s about the percentage cost of Windows for a typical consumer PC (most people and most OEMs think that percentage is much higher). I don’t know what that coincidence means but found it interesting when I saw the number.

I will say there are many ways to think about the percentage. But one thing I know for sure is that the percentage is not the biggest issue. It is the easiest issue to see. It might also be the easiest to address. It reminds me of that crazy desktop icon for Internet Explorer. It seemed like such an easy issue to address, but it was also a principle. So we’ll see what happens.

Related to that was the idea that many people asked about: why didn’t we allow or support “good OEMs” to do things? This is super interesting and a real sore point for the Windows effort. As it turns out, the OEM business allowed all sorts of things for the bigger, better customers. As you can imagine, better customers get better treatment—that seems reasonable. So in particular on pricing, the bigger OEMs got better deals. On the other hand, the more PCs someone shipped the more we cared about the technical stuff. But because of the scale we might agree to let a test of some PC take place in one market to see how an exception would go, or maybe one low-end cheap PC could bend the rules. That kind of stuff. The DOJ consent decree did away with all that and provided for uniform terms for all customers. So when people ask for “most favored nation” systems, be careful, because that draws a lot of attention when a company gets big and dominant.

That’s enough for this thread. I suspect there are more questions and comments, so please join in on Twitter.


The App Store Debate: A Story of Ecosystems was originally published in Learning By Shipping on Medium.

22 Jun 03:37

J.B. Handley’s unthinking person’s guide to the COVID-19 pandemic – Science-Based Medicine

mkalus shared this story from Science-Based Medicine.

One of the more bizarre things that have happened related to the COVID-19 pandemic is the way that antivaccine activists have formed an unholy alliance with COVID-19 conspiracy theorists. It might seem bizarre to those not familiar with antivaccine pseudoscience, but it actually makes perfect sense to those of us who have been following the antivaccine movement for a long time, for the simple reason that all antivaccine pseudoscience is based on conspiracy theories. So it isn’t much of a surprise that someone I’ve been following for a very long time (16 years!) has gone all-in on COVID-19 conspiracy theories. I’m referring to J.B. Handley, the man who, with his wife, founded Generation Rescue, an antivaccine group dedicated to the now-refuted hypothesis that mercury in vaccines causes autism. When last we met him three years ago, he was once again attacking vaccine science and spewing his usual antivaccine misinformation. Before that, he was known for misogynistic attacks on female journalists writing about antivaxxers and attacks against defenders of science in general. These days, he’s posting his pseudoscience on the website of Robert F. Kennedy, Jr.’s antivaccine group Children’s Health Defense in an article entitled “LOCKDOWN LUNACY: The Thinking Person’s Guide.” It’s useful to examine, because it’s basically a cornucopia of COVID-19 misinformation, disinformation, and conspiracy theories.

Handley’s article is constructed based on what he calls “facts.” As is so frequently the case with articles of this sort, his sixteen “facts” are a mixture of facts (deceptively presented), nonsense, and “sort-of” facts that are partially true, all used to cast doubt on the conventional narrative about COVID-19. All of them are basically cherry picked claims. Before I dig in, let me just mention something about J.B. Handley. He is not a scientist of any sort. He’s a businessman who co-founded and co-managed Swander Pace Capital, a private equity firm, until retiring in early 2014. He has no background in science, immunology, autism, infectious disease, vaccines, or, of course, COVID-19. Moreover, he’s known for his admiration of antivaccine icon Andrew Wakefield, whom he likens to “Nelson Mandela and Jesus Christ rolled up into one.” None of that stops him from engaging in his usual nonsense of the sort that I’ve been commenting on since 2005. So, with that history in mind, let’s dig in. Here’s his first “fact”:

Fact #1: The Infection Fatality Rate for COVID-19 is somewhere between 0.07-0.20%, in line with seasonal flu.

The Infection Fatality Rate math of ANY new virus ALWAYS declines over time as more data becomes available, as any virologist could tell you. In the early days of COVID-19 where we only had data from China, there was a fear that the IFR could be as high as 3.4%, which would indeed be cataclysmic. On April 17th, the first study was published from Stanford researchers that should have ended all lockdowns immediately, as the scientists reported that their research “implies that the infection is much more widespread than indicated by the number of confirmed cases” and pegged the IFR between 0.12-0.2%. The researchers also speculated that the final IFR, as more data emerged, would likely “be lower.” For context, seasonal flu has an IFR of 0.1%. Smallpox? 30%.

Here, I like to cite Carl T. Bergstrom, a professor of biology, whose Twitter feed is essential reading (if you’re on Twitter) in this era of COVID-19. He notes that this claim is false, both in its estimate of how low the COVID-19 infection fatality rate is and in what the infection fatality rate of typical seasonal flu is. For instance, Handley cites John Ioannidis’ much-criticized (and rightfully so) seroprevalence study carried out in Santa Clara County, California, which claimed to have found that the number of people who’d been infected with SARS-CoV-2 in the county was 50 to 85 times higher than previously thought, elevated numbers that suggested that the vast majority of COVID-19 cases were milder than previously thought and that the infection fatality rate was much lower than previously thought. The problems with this study are summarized here, here, and here, but the bottom line is that it examined a biased sample. As Bergstrom notes, the best estimates of the infection fatality rate of COVID-19 range from 0.5% to 1.5%:

Unsurprisingly, Handley also cites Dr. Scott Atlas, a Fellow at the Hoover Institution of Stanford University, a well-known conservative think tank known for downplaying the severity of COVID-19 and opposing lockdowns as a strategy to slow the spread of COVID-19. One notes that Dr. Atlas is a neuroradiologist and has no particular expertise in infectious disease or epidemiology. His entire shtick is to argue that COVID-19 is no big deal, that the risk of dying from it is so low that radical measures to stop it are not indicated.

Next up:

Fact #2: The risk of dying from COVID-19 is much higher than the average IFR for older people and those with co-morbidities, and much lower than the average IFR for younger healthy people, and nearing zero for children

This is true, but the proper response to this is: so what? Handley basically uses these data to argue that it’s only old people who are dying, so closing schools makes no sense. There is an argument to be made that, because children are much less likely to become severely ill from COVID, school closures should be reconsidered, but there is still a lot of uncertainty in this area. He cherry picks a single study suggesting that schools in Ireland are not a driver of COVID-19 spread, and runs with it. At least he concedes that older teachers and school employees might have something to fear.

Handley goes on:

Fact #3: People infected with COVID-19 who are asymptomatic (which is most people) do NOT spread COVID-19

As is Handley’s usual practice, he cherry picks a single case and a single study. There is actually abundant evidence that presymptomatic or asymptomatic individuals can spread COVID-19. For example, Eric Topol and Daniel Oran reviewed the evidence by examining a number of cohorts with COVID-19, finding that, yes, around 40% of COVID-19 cases are asymptomatic but also finding that asymptomatic patients can drive transmission:

Asymptomatic persons seem to account for approximately 40% to 45% of SARS-CoV-2 infections, and they can transmit the virus to others for an extended period, perhaps longer than 14 days. Asymptomatic infection may be associated with subclinical lung abnormalities, as detected by computed tomography. Because of the high risk for silent spread by asymptomatic persons, it is imperative that testing programs include those without symptoms. To supplement conventional diagnostic testing, which is constrained by capacity, cost, and its one-off nature, innovative tactics for public health surveillance, such as crowdsourcing digital wearable data and monitoring sewage sludge, might be helpful.

While it’s not entirely clear if completely asymptomatic COVID-19 patients who never develop symptoms transmit COVID-19 less frequently than presymptomatic people (those who ultimately do go on to develop symptoms), there is little doubt that asymptomatic people, whether they do or don’t go on to develop symptoms, do transmit COVID-19.

Next up:

Fact #4: Emerging science shows no spread of COVID-19 in the community (shopping, restaurants, barbers, etc.)

I laughed at this one. “Emerging” science shows no such thing. Handley’s claim is based on interviews with a single German scientist in Business Insider and RTL Today. As Bergstrom notes, Handley confuses the absence of evidence with evidence of absence: community spread while shopping or doing other activities out in public is much harder to track down via contact tracing than spread among co-workers, family members, and the like, because there is no easy way to figure out who was in the store or other facility at the same time and connect spread that way. We also know that spread by respiratory droplets is important, which supports transmission of the disease whenever humans are in close contact.

Handley then goes on to lie:

Fact #5: Published science shows COVID-19 is NOT spread outdoors.

In a study titled “Indoor transmission of SARS-CoV-2” and published on April 2, 2020, scientists studied outbreaks of 3 or more people in 320 separate towns in China over a five-week period beginning in January 2020 trying to determine where outbreaks started: in the home, workplace, outside, etc.? What’d they discover? Almost 80% of outbreaks happened in the home environment. The rest happened in crowded buses and trains. But what about outdoors? The scientists wrote:

All identified outbreaks of three or more cases occurred in an indoor environment, which confirms that sharing indoor space is a major SARS-CoV-2 infection risk.

Actually, the very study cited demonstrated that outdoor transmission is possible. Also, not surprisingly, Handley ignores studies that do show outdoor transmission of coronavirus. One such study (caution: on a preprint server) estimates that the chance of transmission of COVID-19 in a closed environment is 18.7 times greater than in an open-air environment. It’s probably true that transmission is considerably less likely outdoors, but it’s nonsense to claim that COVID-19 is not spread outdoors.

“Fact #6” amuses me, given the topic of last week’s post:

Fact #6: Science shows masks are ineffective to halt the spread of COVID-19, and The WHO recommends they should only be worn by healthy people if treating or living with someone with a COVID-19 infection.

And:

Fact #7: There’s no science to support the magic of a six-foot barrier.

I covered both of these issues in detail last week. Masks work. Social distancing of 1-2 meters works. I don’t feel a compelling need to cover the same ground again here. Handley is, as is usual for him, full of crap on this issue. He even repeats old World Health Organization recommendations as though they were new. In fairness to Handley, the WHO hadn’t updated its recommendations when his article was originally written, but oddly enough, he hasn’t mentioned that the WHO has since changed its recommendations on wearing masks: people over 60 and people with underlying medical conditions should wear a medical-grade mask when they’re in public and cannot socially distance, while the general public should wear a three-layer fabric mask in those situations.

Next up:

Fact #8: The idea of locking down an entire society had never been done and has no supportable science, only theoretical modeling.

And:

Fact #10: The data shows that lockdowns have NOT had an impact on the course of the disease.

This is one of those claims that’s sort of true, but misleading. We haven’t faced a pandemic like that of COVID-19. I’m particularly amused, though, by how Handley cites the WHO as not having a total lockdown on its list of pandemic mitigation measures as late as 2019. He cites a report entitled Non-pharmaceutical public health measures for mitigating the risk and impact of epidemic and pandemic influenza. He even reproduces this table:

Under “extraordinary measures”, workplace measures and closures are listed, as are internal travel restrictions. Those are basically the building blocks of our current lockdown. Unsurprisingly, Handley can’t help but blame it on the Chinese:

Obvious question: if there was no science to support a lockdown and we’d never actually done one before and many in public health said it would be a terrible idea, why did it happen? There’s really two answers as best I can tell. The first answer is that the World Health Organization, early on in the pandemic, chose to praise the Chinese response of locking down Hubei Province, which effectively served to legitimize the practice, despite the extreme limitations of data available to anyone about the Chinese lockdown’s actual effectiveness.

One can argue whether lockdowns go too far, but recent studies in Nature (ignored by Handley, of course) suggest that pandemic lockdowns dramatically decreased community transmission of coronavirus, prevented tens of millions of infections, and saved millions of lives. In fairness, these studies were released after Handley’s article was published, but, again, Handley has updated his article only with cherry picked material that (he thinks) supports his viewpoint. Oddly enough, he hasn’t updated it with these studies.

Handley also cites a report that COVID-19 was circulating widely in the US many months earlier than previously thought. This claim is not supported by molecular biology and has not been confirmed.

Handley also relies on a report from that noted institution with expertise in infectious disease and epidemiology, J.P. Morgan. Unsurprisingly, being a finance man himself, he thinks that this is a reason to take it seriously:

I’m going to start with a source that you might consider unusual, the global bank JP Morgan. Of all the facts I have covered, this one about the ineffectiveness of lockdowns has become the most politicized, because it’s being used to begin playing the blame game. JP Morgan, on the other hand, creates their analysis to do something very nonpartisan: make money. Their analysts crunch data to see which economies are likely to restart first, and you shouldn’t be surprised at this point to discover three things: 1) the least damaged economies are the ones that did the lest [sic] onerous lockdowns, 2) lifting lockdowns has had no negative impact on deaths or hospitalizations, and 3) lifting lockdowns had not increased viral transmission. Reading the JP Morgan conclusions is profoundly depressing, because here in the U.S. many communities are STILL being put through many different lockdown mandates, despite overwhelming evidence to their ineffectiveness. Consider this chart from JP Morgan that shows “that many countries saw their infection rates fall rather than rise again when they ended their lockdowns – suggesting that the virus may have its own ‘dynamics’ which are ‘unrelated’ to the emergency measures.”

Yes, because J.P. Morgan is out to make money, Handley considers its views more credible than those of scientists. Never mind that the report wasn’t public. Never mind that the data used to come up with the conclusions were never made public. Never mind that the report was not peer-reviewed. Personally, I like this retort the best:

Now, let’s go back to…

Fact #9: The epidemic models of COVID-19 have been disastrously wrong, and both the people and the practice of modeling has a terrible history

While many disease models have been used during the COVID-19 pandemic, two have been particularly influential in the public policy of lockdowns: Imperial College (UK) and the IHME (Washington, USA). They’ve both proven to be unmitigated disasters.

Handley attacks Neil Ferguson for previous estimates that turned out (to him, at least) to be overestimates, such as for Mad Cow Disease, but the attack is rather bizarre. Ferguson’s estimate had a huge error range, and the actual number of deaths from the disease fell in that range. You can argue that such huge uncertainty makes the estimates rather useless, but Ferguson wasn’t exactly wrong. Handley also cherry picks the worst case scenarios estimated by Ferguson for deaths from bird flu in 2005 and H1N1 in 2009 and concludes that, because the ultimate numbers of deaths were far lower than the worst case scenarios, Ferguson is not to be trusted.

In any event, the Imperial College model for COVID-19 did indeed estimate that there would be 2.2 million US deaths if the pandemic proceeded to herd immunity uncontrolled and 1.1 million US deaths if it went through to herd immunity with controls in place. Bergstrom notes that we’ve put massively intrusive controls in place but have had nearly 120,000 deaths anyway, while only around 5% of the population has been infected. At that rate, if 50% of the population goes on to be infected, we could easily top one million deaths.

Handley also attacks the IHME model, which has a lot of problems, correctly pointing out that it performed poorly. However, as Bergstrom notes, the model actually underestimated the number of US deaths.

Onwards:

Fact #11: Florida locked down late, opened early, and is doing fine, despite predictions of doom.

At the risk of relying too heavily on Bergstrom:

Next up:

Fact #12: New York’s above average death rate appears to be driven by a fatal policy error combined with aggressive intubations.

This is partially true. It is true that COVID-19 took a huge toll in nursing homes in New York. However, the bit about “aggressive intubations” is a misrepresentation of a controversy in how to manage COVID-19 patients requiring mechanical ventilation that turned into a conspiracy theory that “ventilators are killing COVID-19 patients”. Basically, way back in early April (which, these days, seems like ancient history), an emergency medicine doc named Dr. Cameron Kyle-Sidell produced a YouTube video in which he questioned how ventilators were being used to treat COVID-19 patients. His concerns were mainly that doctors were too fast to place patients on a ventilator and that they were using ventilator settings for acute respiratory distress syndrome (ARDS). One of the key characteristics of ARDS is that the lungs become noncompliant (stiff) as part of the inflammatory process that impairs their ability to exchange oxygen. Consequently, high ventilatory pressures are often needed, specifically positive end expiratory pressure (PEEP), the pressure at the end of expiration, which helps keep the alveoli (air sacs) open.

Although Dr. Kyle-Sidell’s video was treated as though it were a shocking revelation that proved that doctors don’t know what they’re doing when it comes to treating COVID-19, in reality what he was saying wasn’t anything that radical at all. It also seemed to reveal an ignorance of how COVID-19 was actually being treated in ICUs at the time. Dr. Rohin Francis wrote a great article on MedPage Today entitled “The Great Ventilator Fiasco of COVID-19“, where he noted that the “very core principle of ventilating a patient is to reduce oxygen and pressure being delivered as much as possible. ITU [intensive treatment unit] nurses are experts at doing exactly this and it’s been an absolute fundamental of management for decades.” Basically, Handley doesn’t know what he’s talking about here and is mixing a legitimate criticism of New York’s handling of nursing home cases of COVID-19 with nonsense about ventilators killing patients.

Next up:

Fact #13: Public health officials and disease epidemiologists do NOT consider the other negative societal consequences of lockdowns.

I’m with Carl Bergstrom here. This one is straight up nonsense. Public health officials frequently consider these issues, but they do so in the context of modeling disease.

Handley can’t stop, though:

Fact #14: There is a predictive model for the viral arc of COVID-19, it’s called Farr’s Law, and it was discovered over 100 years ago.

Farr’s law basically states that the number of cases per day in viral outbreaks follows a bell-shaped curve. The problem is that Farr’s law isn’t always accurate. For instance, using Farr’s law to predict the size of the AIDS epidemic resulted in a massive underestimation of the true scope of the epidemic. It also turns out that the IHME model that Handley attacked used Farr’s law to model the size of the COVID-19 outbreak in the US and failed badly by underestimating it.

Next up, a prediction, not a fact:

Fact #15: The lockdowns will cause more death and destruction than COVID-19 ever did.

This is a common talking point among COVID-19 deniers. Of course, the correct comparison is not the current lockdown versus the pre-COVID-19 pandemic situation, but rather what would have happened if there were no lockdown and coronavirus were allowed to run unchecked through the US population. Unsurprisingly, Handley cherry picks quotes from some lockdown opponents that don’t take into account how much reduced economic activity there likely would have been due to behavioral changes among Americans in the setting of an uncontrolled pandemic.

Finally:

Fact #16: All these phased re-openings are utter nonsense with no science to support them, but they will all be declared a success.

This, too, is a prediction and not a fact. It remains to be seen if this will be true. He continues:

Yup, still waiting for your Phase 1 or Phase 2 re-opening? Trust me, whomever conjured up your state’s plan is quite literally making things up as they go along. And, given the extreme range of plans taking place—even in neighboring counties—the odds that they have ANYTHING to do with the arc of the virus is exactly ZERO, but you already knew that if you read this far. The good news is they will ALL succeed, because we never needed to lockdown in the first place—MISSION ACCOMPLISHED.

I’ll give Handley some credit. For some states, he’s not entirely wrong. That doesn’t excuse the mass of nonsense that preceded his last “prediction”. On the other hand, he is wrong when it comes to many states, including my own state of Michigan.

I first “met” J.B. Handley over 15 years ago, when he was the founder of Generation Rescue, an organization that promoted the discredited idea that mercury in the thimerosal preservative that used to be in several childhood vaccines caused autism. It’s the same organization that Jenny McCarthy went on to become president of. I guess I shouldn’t be surprised that in the era of COVID-19 he’s still promoting pseudoscience.

22 Jun 00:56

That Syncing Feeling

by Rui Carmo

The main reason I ended up not cancelling my Dropbox subscription last year was awkward, and is also the reason why this post has been an entire year in the making.

But I hope it’s worth picking up again and expanding upon, because this time maybe (just maybe) my notes on this topic can be useful.

The reason I am doing so is that after reading Nikita Prokopov’s post I decided to revisit the issues I had with getting Dropbox and OneDrive to work for me, since despite having used Dropbox since the very beginning I started using OneDrive for a lot of my personal stuff even before I joined Microsoft (obligatory Disclaimer).

And over the years, I have grown more and more disillusioned with Dropbox and its ever-increasing bloat, and yet couldn’t find the right fit even as I got more and more into using OneDrive.

SyncThing, which I was only tangentially considering last year, now looks like it actually makes a lot of things much easier for me–but, alas, not everything, since it has a few fatal flaws that Nikita didn’t cover in his piece.

The Problem With Dropbox

But let’s start with the compelling event for all of this–Dropbox‘s increasing bloat and lack of focus on why people started using it in the first place.

Last year I summarized things like this (but never got around to post it):

Dropbox is a resource hog that has completely lost the focus on what is its most critical function–simple, transparent, zero-hassle file sync. Over the past couple of years, they went on a “value added” rampage that hiked prices (doubling storage even if you had no use for it), emphasised dubiously useful features, put the Linux client on life support and even shipped an entirely new, unremovable file manager across the board, without fixing any of the gripes people have had with it for many years.

If I could, I would ditch it and move on.

And a year later, all of it is still true:

  • There is still no personal plan below 2TB (I am using 150GB, give or take, and paying for a massive amount of storage I don’t use).
  • It still captures file change events outside its own folder, hogging the CPU whenever I do any significant amount of I/O outside the Dropbox folder.
  • The menu/taskbar app has taken up ever-increasing amounts of RAM because it started shipping with its own Chromium runtime to render a UI I never use.
  • The added bloat provides zero usable functionality inside its massive pop-up, which now also includes an ad to get me to use it on mobile even though Dropbox has to know I have mobile devices linked to my account.
  • Adding insult to injury, Dropbox has enforced a three-device limit for free accounts (so they have to know how many and what devices people use).
  • Besides its intrusive Office plugin (which I took pains to disable), it now requires installing a system extension to do Smart Sync (i.e., on-demand access to offline files), which is more system access than I’m willing to grant.

And now they’re “adding value” with calendar syncing, a password manager, and goodness knows what else, when I need exactly none of those things (not even Smart Sync).

I don’t even need the Dropbox client on iOS–all I need is the file provider, which works very, very well (except just after an update, where I need to re-open the Dropbox app to get it working again for some reason).

Things Were Great When They Were Simple

Again, I had a great experience with the “core” Dropbox functionality for many years. I even used it to directly sync this website’s content from my machines (which was awesome for posting instantly), but it became so much of a hassle to keep the Linux daemon running that I now post things via git instead.

That was a challenge at first since the current repository has 2GB of content alone, but thanks to Anders Borum’s amazing Working Copy I can actually post link blog entries via Siri Shortcuts and have a pretty decent authoring experience.

For work and personal projects, I’ve been working with full-blown git repositories inside Dropbox for years now, and Dropbox has been rock solid in terms of speed, flexibility, reliability, and platform support.

I only had a couple of issues where conflicts or sync lag caused problems–the risks of having those were far outweighed by the ability to switch machines and have everything right there without having to git pull from slow (and sometimes hard to reach) internal repositories.

I could hack on something from my laptop, move a couple of floors, and pick things up again from my desktop without any friction–even if my laptop only picked up Wi-Fi for a portion of that time.

And then they decided to hobble the Linux client, and I decided to start looking for alternatives in earnest.

Why I Cannot Migrate To OneDrive

The reason I didn’t write more about this last year was that I tried migrating everything to OneDrive–and failed.

And I failed because my primary (personal) use case for file syncing is across two Macs and a few Linux machines, and it’s just not suitable for that.

This Is Fine

It bears mentioning at this point that (besides using it every day for work, on Windows) I’ve been using Microsoft 365 Family (which used to be called “Office 365 Home”) for almost seven years (a year and change longer than I’ve been at Microsoft).

And it provides 1TB of OneDrive storage for each of its up to 6 users–which is still much more than I need to sync across machines, but effectively free when compared to Dropbox since in my mind I’m paying for the Office app licenses (across macOS and iOS) and not so much the associated services1.

OneDrive works great for my family (the kids use it to store and share homework, etc.), but even after five years at Microsoft, I have exactly zero personal Windows machines except for a VM on my KVM box for running very specific software.

This may be atypical, but I’m not made of money, and I do (for the moment) prefer using a Mac for my personal projects (although to be honest that is looking like a dicier proposition with every macOS release).

The Mac Experience

As it turns out, OneDrive on the Mac is a far cry from the Windows version. It works mostly OK with Office files of any size and with relatively small numbers of files, but is completely unusable with many hundreds of thousands of small files.

And, guess what, what I do on my personal machines (coding, mostly) involves the constant manipulation of… hundreds of thousands of small files.

Which is why last year I wrote:

Whereas Dropbox can effortlessly (well, with some CPU usage, but quickly and transparently) sync all of my git repositories and project trees (even the ones with many thousands of junk duplicates inside multiple node_modules folders), OneDrive chokes on a working set that is less than a tenth of my current project tree (around 60000 files, including the ones inside .git folders). It simply does not sync anything and spends hours “processing changes”.

Or, worse, it will occasionally give up halfway, complain that there are already multi-gigabyte top-level folders with the same names and ask me to rename them to continue syncing.

Well, it’s been a year, and it still has the same issues, and that means I still can’t use OneDrive on my machines for what I need.

Feature Imparity

Besides there being no official Linux client (more on that later), the Mac client lags behind the Windows one in many respects–sometimes annoyingly so. Even though it seems to share some of the Qt code base (I am also not a fan of its huge menu/tray pop-up on any platform, but it is vastly better than Dropbox‘s), it is a fundamentally different beast, and traditionally lags behind in features.

For instance (and this is a very dear example to me), the Personal Vault feature only works on Windows. I would switch to it in a heartbeat if it worked on macOS as well, for the very simple reason that I have kept an encrypted disk image on Dropbox to store critical documents since the very first day I started using Dropbox and would love to have a fully cross-platform solution for that.

But no, I can only use OneDrive‘s Personal Vault from iOS or Windows.

Another dimension of feature parity is that it doesn’t handle native macOS files very well. For example, it won’t sync (sometimes even partially):

  • Extended attributes
  • Perfectly legal macOS filenames
  • Very long pathnames
  • Very large files (I have two VM images backed up on Dropbox that exceed the 15GB file limit)

All of these are known issues listed in this support page (which also includes variations for OneDrive for Business and SharePoint–but I’m only talking about OneDrive Personal here).

They might not be relevant for most users, but they are all blockers for me in one form or another, and during my failed migration attempt last year, I had a lot of issues with some older projects, to the extent where I just gave up and filed them away into disk images.

But to this day I still come up against filename and pathname issues whenever I am dealing with, say, a node_modules folder.

This was on Twitter, captioned "when you forget .gitignore", and is just as applicable here.

Honorable Mention

As to the Linux client aspect, abraunegg/onedrive is excellent for what it is (I had it installed on both my Linux laptops and a couple of Raspberry Pis to sync source trees), but it comes with a lot of caveats:

  • It is written in D (for some reason), so contributing to it (let alone compiling it) is much harder than you’d expect (but yes, I got it to work on a Raspberry Pi).
  • Lacks a lot of the niceties you get from Dropbox on Linux (it has no UI whatsoever).
  • Has trouble tracking local changes effectively, since it uses a mix of inotify and polling that ended up generating quite a few conflicts while I was trying it out.
  • It is not multi-threaded (still, in 2020), which effectively means it is several orders of magnitude slower than anything else (and that slowness also increases the chance of having conflicts).

It is pretty great if you have a few thousand files (or if you want to make periodic read-only snapshots from OneDrive), but on my huge working tree it would simply cause too many sync issues and be unusable on my slowest Linux laptop.

So much so that I had this (or a variant of it) right at the top of my .history on all of my machines for the next couple of months:

find . -name "*origin-Rui*" -type d -print0 | xargs -0 rm -rf

My Failed Migration Attempts

The conclusion I came to last year when I tried to move all my significant files from Dropbox to OneDrive before my Dropbox subscription renewed was that there was just no way I could get it to work for my personal projects–in particular, my Development folder and the hundreds of git repositories inside it.

Recapping what I already wrote above, I had never-ending issues with the number, names, pathname depth, and sometimes even the kind of files in question, to the point of OneDrive locking up the machine, failing to sync, or just crashing.

I tried everything from using git bundle files to archiving stuff away, but gave up after a week or two of trying different ways to migrate things across (including diving into OCaml a bit and fixing a long-standing issue in the Unison file synchronizer, which I used to push some things across in lots).

That is not to say that there haven’t been changes and improvements in OneDrive.

For instance, about this time last year (mid-June, if I recall correctly), and after my first attempt at migrating, OneDrive shipped differential sync on the Mac, which sped up syncing large files considerably, as well as Files On-Demand (the equivalent to Dropbox Smart Sync), which I also tried to leverage.

Low-Level Weirdness

But Files On-Demand did not improve the situation–in fact, it would frequently get stuck (and, worse, get other applications stuck) to an extent where I was unable to force quit an application using a remote file.

It also didn’t work with CLI tools (which is sad, since OneDrive in Windows even works–albeit slowly–with the Windows Subsystem for Linux).

Worse, there were instances when I could not even restart my Mac without resorting to sudo reboot, since stuck applications waiting for Files On-Demand would also cause the Dock and the Finder to become stuck themselves.

So I stopped using it altogether, and haven’t turned it back on since.

OneNote Weirdness

Another unexpected thing that I came across was “losing” all my OneNote notebooks during another migration attempt.

This is due to a strange interdependency between OneNote and OneDrive–despite OneNote having effectively gone web-only and stopped storing local document databases in the filesystem (which I understand from a sync perspective), it kept some filesystem shims in place that are a bit of an annoyance. OneNote notebooks are represented in OneDrive as .url files that look like this:

[InternetShortcut]
URL=https://onedrive.live.com/redir.aspx?cid=XXXXXXXXXXXXXX&resid=YYYYYYYYYYYY&type=3

When you create or open a new notebook, you are placed into the OneDrive virtual filesystem to do so, even though none of your data is actually accessible via the filesystem itself. You can’t, for instance, see individual notes or figure out how much storage a notebook is taking.

This is broken in many ways, and I’ve come to live with it everywhere, on and off work. But the biggest annoying side-effect I’ve come across from this approach is that conflicts on these files cause your notebooks to be renamed, which means that notebook Foo every now and then becomes Foo-Conflict-PCBARBAZ because (you guessed it) that .url file got renamed automatically.

The Mobile Angle

Also, the second most critical feature I needed (seamless access from Files on iOS) worked unreliably at best–OneDrive occasionally takes hours to update the file provider view for some reason, and storing files from non-Office apps into OneDrive silently fails, with the new file not being uploaded at all for days2.

Good Enough (And Safer) For Most People

Moving wholesale to OneDrive was (and still is) a no-go for my particular use case.

But for “civilians”, it’s plenty good enough. You have to keep in mind that despite the failure at moving my projects across, I keep all my Office documents and PDFs in it, as well as massive chunks of data like the entire working set of family photos I’m (patiently) curating and a few video projects.

So if you’re reading this because of Dropbox feature bloat and are on a Mac, I suggest you go and get it from the App Store right now and give it a go.

Because, unlike Dropbox, it can run sandboxed and comply with all of Apple‘s security requirements, and I think that for most security-conscious people, that should be more than enough reason to use OneDrive instead of Dropbox (I could go on about data sovereignty and other things, but I do enough of that at work…).

So What Is This SyncThing… Thing Like?

Enter Nikita Prokopov’s post, and his unabashed praise of SyncThing‘s simplicity.

Which is well-warranted, because it works brilliantly for syncing the kind of data I need, although to be fair I’ve only thrown a few tens of gigabytes at it so far (including the 2GB repository this site lives in).

It just ate it up and synced it across three machines, in mere seconds. Impressive stuff. I’ve had no issues for the past three days or so, so I’m now piling on more data on a daily basis.

I’m going to do that piecemeal and systematically test for the same kind of trouble I came across last year, because reverting this kind of thing is a major pain, and I’ve been there before.

But even without being able to claim any sort of success yet, here are a few highlights from my experience so far:

Things I Liked

  • Decent (if limited) desktop clients for both macOS and Linux that do the basics (show it’s running, animate a tray icon to show activity and let you easily get at the config).
  • “Magic” peer discovery/direct LAN sync (which is one thing at which Dropbox used to excel, and that OneDrive never did for me on the Mac).
  • Ability to specify in a sane way what should be synced, with a simple .stignore text file (which is much nicer than Dropbox‘s per-folder xattr -w com.dropbox.ignored 1, which is hidden away in a dark corner of the docs)–see the sketch after this list.
  • Trivial setup on my Synology NAS using linuxserver/docker-syncthing (there is a “proper” Synology package, but it runs under its own UID and I wanted more control).
  • Shared folders can be “send only” or “receive only”, which means my Synology can act as a mirror between machines and not have to bother with keeping track of local changes.
  • Versioning is configurable on a per-shared-folder basis (I have it switched off for most folders, since it really piles up when you’re doing development, but on for personal documents).
  • None of your data is stored anyplace you don’t control.
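
As a sketch of what that .stignore point looks like in practice (the patterns and names here are just placeholders; the full matching rules are in the SyncThing docs), an ignore file for a development folder might read:

// anything named node_modules, in the root or any subdirectory
node_modules
// macOS cruft; the (?d) prefix marks it as safe to delete when it is all that blocks removing a directory
(?d).DS_Store
// build output, anchored to the root of this shared folder by the leading slash
/build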

Things I Don’t Like

  • Lack of cohesion across desktop clients.
  • Having a web server (even if only bound to localhost). At least there’s an API key being exchanged with the desktop clients.
  • Having to do most things through the Web UI.
  • Playing around with keys and IDs (even if it does do local discovery, it’s not user-friendly enough for my tastes).
  • File versions are stored locally on the client.
  • None of your data is stored anyplace with automatic backups.

Things It Doesn’t Do

  • Folder sync indicators (of debatable use, but I’m used to them and it’s hard to know what’s been synced or not before closing your laptop).
  • Can’t easily toggle subfolder sync on/off (Nikita hates the Dropbox UX for that and I’m not a fan either, but it is useful and I used it a lot).
  • Work on iOS (at all). It will never happen with the current codebase and/or iOS restrictions.
  • Simple per-file or per-folder sharing with other people (which is a key feature of OneDrive, especially where it regards security and rich documents).
  • Sync macOS Extended Attributes.

And this last one is the current showstopper for me (well, at least for general use).

What it means is that your Finder tags and pretty important metadata get stripped when you drop “old school” macOS bundles into SyncThing (or OneDrive, but that’s beside the point now).

So (for instance) application bundles (which you shouldn’t be trying to sync anyway, but which are a great example), some project bundles, and other innocuous-looking document formats like Mindnode mind maps (or anything from any native macOS application that stores its data in a folder sewn together with extended attributes) are going to break.

Permanently, because essential (meta)data is not going to be synced at all. Which is kind of understandable, but apparently not even on SyncThing‘s roadmap.

And out of all the “cross-platform” sync services, Dropbox is the only one that preserves (or at least doesn’t mangle) macOS extended attributes the way I need them to be handled. iCloud seems to do the right thing (it better, right?), but to be honest I don’t trust it enough yet to put really important files in it.
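
Incidentally, if you want to spot-check whether a file’s metadata survived a round-trip through a given service, macOS’s xattr tool is enough (the file names below are just placeholders for an original and its synced copy):

# list extended attribute names and values (Finder tags, quarantine flags, and so on)
xattr -l "original/some-tagged-file.pdf"
# compare the attribute names on the original and the synced copy; any output means something was dropped
diff <(xattr "original/some-tagged-file.pdf") <(xattr "synced/some-tagged-file.pdf")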

This effectively means that your Documents folder has to be split up depending on what kind of data you have in it, which is impossible to manage from a project perspective. And even if you’re just talking about single files, it’s a pain to keep track of which apps’ data files can survive unscathed in which cloud service.

Conclusion (For Now)

The way forward (for me at least) seems to be splitting things up: get my development projects and massive archives off Dropbox and onto SyncThing, keep all my “Office” stuff, photos and shared folders on OneDrive, and have a small subset of them in iCloud (and pray the latter works at all).

My Synology NAS may come in handy here since besides acting as a private SyncThing hub it can sync to/from both OneDrive and Dropbox (admittedly slowly, as I’ve seen over the past 48h)3.

And it is quite appealing to do that even if it means I’ll have a single point of failure, because it might well be that I can set up just SyncThing on my personal Mac and Linux machines (and have only one piece of software hammering my hard disks).

Then I’d point my machines at each other, add the NAS to the group and have that act as a back-to-back relay between select SyncThing folders and both Dropbox and OneDrive, as well as backing up the whole thing to Azure storage.
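
For what it’s worth, the NAS end of that is a single container; something along these lines is roughly what I mean (the image is the linuxserver one mentioned above, and the user/group IDs, timezone and volume paths are placeholders for whatever your Synology uses):

# ports: 8384 = web UI (LAN only), 22000 = sync protocol, 21027/udp = local discovery
docker run -d \
  --name=syncthing \
  -e PUID=1026 -e PGID=100 -e TZ=Europe/Lisbon \
  -p 8384:8384 -p 22000:22000 -p 21027:21027/udp \
  -v /volume1/docker/syncthing:/config \
  -v /volume1/sync:/data \
  --restart unless-stopped \
  linuxserver/syncthing

From there the shared folders get added in the web UI and marked “receive only”, so the NAS just mirrors the other machines (and can then push that mirror out to OneDrive, Dropbox and Azure on its own schedule).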

That would allow me to keep using the iOS clients I need (and probably stick to the Dropbox free tier on one or two apps that only work with Dropbox) as well as letting me achieve my primary goal, which is to migrate the bulk of my data off Dropbox and onto something that does solely what I need instead of wasting CPU cycles with random frippery…

Hopefully I won’t take another year to go about it, although to be fair I might end up spending more time (and money) fighting Dropbox than paying for another year.

Ironically, I’d probably pay the same for a plain client that worked just like the original, with a tenth of the disk space and zero added features. Zero!


  1. Which I really wish had a “Premium” tier for private domains hosted anywhere else but GoDaddy, but that’s another story I’ll write about some other time… ↩︎

  2. I don’t know if this is still the case in the file provider since I now use iCloud for most “normal” documents I need to have with me on iOS, but given a quick test today with the OneDrive app that took almost 15 minutes to manually upload a simple text file using the share sheets, I’d say… yes? ↩︎

  3. Before you write in mentioning Synology Drive, it too tries to do too much, thank you. I do like that it has an iOS client, but it seems to be too limited. ↩︎


22 Jun 00:53

A Wholesome Post About A Toronto Legend

mkalus shared this story from FAIL Blog.

Well isn't this a wonderfully heartwarming and wholesome little post? Nav Bhatia's a 67-year-old legend who's been to every Toronto Raptors home game since 1995. And by the sounds of it, he's one of the most entertaining parts of the game.

1.

This is Nav Bhatia. You're going to see a lot of him waving his towel on TV. He's 67 years old and still cheers louder than your teenage nephew. If you've ever been to a Raps game, you can't miss him. But here's what most people don't know:

2.

He's been at every single Raptors home game since 1995. That's right: Every. Single. One. Through Damon, Vince, CB4, a zillion coaches, blackouts, blizzards, you name it. Big deal, right? Wait, there's more.

3.

Bhatia came to Canada as an immigrant in the '80s with almost nothing. As a brown turbaned guy with an accent he couldn't get a job as an engineer, so he wound up working as a car salesman at a dealership in a rough part of town. Devastating, right? Nope. Not for Nav.

4.

He sold 127 cars in just ninety days. It's a record that stands to this day. He did it the old-fashioned way, by being honest (and yes, some catchy radio ads). He was so good that he eventually bought the dealership. Crazy, right? Guess what's crazier?

5.

He later bought another. Back in 1995, when the businesses weren't doing well, he still bought season tickets. They cost a lot, but he didn't care. He loved the team. Even those ugly purple dinosaur logos. He wore them with pride.

6.

If you go to a Raps game, you'll see his big-ass, huge goofy smile on the baseline. When you're an immigrant, nothing feels more Canadian than waving a Canadian flag while cheering your team. Sports is the great equalizer.

7.

You'll hear a dozen languages, see black guys in dreads hanging out with Korean guys eating poutine. In other cities, that would be weird. In Toronto, it's perfectly normal. It's how the 6ix rolls. Take this pic. How many colors/ethnicities can you see? At least a dozen.

8.

When his dealerships started doing better, he could've called it a day. Instead, every year he spends $300K of his own money to send kids - mostly from brown, immigrant families - to Raptors games. He does it to show them they belong. #Raptors

9.

SUPER FAN


22 Jun 00:52

The App Store Doesn’t Make Apps Safe

Another misconception about the App Store is that it makes apps secure and safe. It doesn’t.

There are things that do make apps safe. No matter how an iOS app is distributed, it runs in a sandbox. An app requires permission from the user to do things like access the address book or microphone. This is just how iOS works: it has nothing to do with the App Store.

The App Store review process probably does run some kind of automated check on the app to make sure it’s not using private APIs and doesn’t contain some kind of malware. However, this could be run as part of a notarization process — this doesn’t have to be tied to the App Store. (Mac apps outside of the Mac App Store go through a notarization process.)
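
For context, notarization on the Mac is already an automated, developer-driven flow. Roughly, it looks like the following, though the exact tooling has changed over the years (altool, then notarytool), so treat the specifics and the names here as a sketch with placeholders rather than gospel:

# sign with the hardened runtime, zip the app, submit it to Apple's notary service, then staple the ticket
codesign --options runtime --sign "Developer ID Application: Example Corp" MyApp.app
ditto -c -k --keepParent MyApp.app MyApp.zip
xcrun notarytool submit MyApp.zip --keychain-profile "NOTARY_PROFILE" --wait
xcrun stapler staple MyApp.app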

Otherwise, App Store review is looking for basic functionality and making sure the app follows the guidelines.

As far as checking that an app doesn’t crash on launch — thanks? I guess? As for following the guidelines: the guidelines are about protecting Apple’s interests and not about consumers.

I would like to say that the App Store filters out bad behavior, but I don’t think it does. We’ve all seen various scam apps, and we’ve seen otherwise well-behaved apps do things like abuse the push notifications system.

It probably catches some egregious scams that we never hear about. I’ll apply the benefit of the doubt. But it didn’t catch that, for instance, Path was uploading the user’s address book. The community outside Apple catches these things, and Apple changes how iOS works so that these things can’t happen without user permission.

And, at the same time, the App Store is a magnet for scam apps. Even in a world where side-loading is possible, scam apps would stick to the App Store because that’s their best shot at getting users to stumble across them.

My grandmother

People have asked if I’d want my grandmother to download iOS apps outside the App Store. The answer is yes. That was how she downloaded her Mac apps, after all. (She was an avid Mac user.)

I’d feel secure knowing that the apps, just by virtue of being iOS apps, are sandboxed and have to ask for permissions. (I’m also imagining a Mac-like notarization step, for additional security. I think this is reasonable.)

In other words: Apple has done a very good job with iOS app security and safety. The fact that we think this has something to do with the App Store is a trick, though.

(I’m not arguing for getting rid of the App Store, by the way. I’m arguing for allowing an alternative.)

21 Jun 22:20

New Brunswick’s plan for COVID-19 contact tracing app thwarted by federal government

by Aisha Malik

New Brunswick’s plans for a COVID-19 contact tracing app have been foiled by the federal government, as reported by the CBC.

Premier Blaine Higgs says the provincial government had worked with the University of New Brunswick to create a mobile app to track the spread of COVID-19. However, it needs Google and Apple’s exposure notification API.

To avoid a patchwork of apps, Apple and Google have restricted the use of their API to one app per country. The federal government is using the API for its own app.

Prime Minister Justin Trudeau announced on May 18th that the federal government is rolling out a nationwide contact tracing app. It is first being tested in Ontario, after which it will become available to all provinces.

Higgs says that this decision is concerning, because the New Brunswick government spent a lot of time developing its own app.

“We felt we could get up and running sooner. We felt we could get into a position and be able to supply a national app,” he told the CBC. “We all agreed … that it needed to be an app that would allow information sharing. It needed to be one that met all the criteria for privacy. And as long as it had that capability, then what difference did it really make?”

“But the federal government has chosen an app. They’re going to pay for it. I just hope we can expedite it as quickly as we felt we could otherwise,” he continued.

Ontario is expected to launch the voluntary ‘COVID Alert’ contact tracing app on July 2nd using Apple and Google’s ‘Exposure Notification System,’ which uses Bluetooth technology to share randomized codes with other nearby smartphones without identifying users.

Other smartphones are then able to access these codes and check for matches against the codes stored on devices.

If someone has tested positive for COVID-19, a healthcare professional will help them upload their status anonymously to a national network. Other users who have downloaded the app and have been in close proximity to them will be alerted that they’ve been exposed to someone who has tested positive.

The government has stressed that the app will be completely voluntary, and it is up to Canadians to decide if they want to download it, but that the app will be most effective if as many people as possible use it.

Source: CBC News


21 Jun 22:18

Manchester Small-Scale Experimental Machine, the first stored-program computer, first ran on June 21st, 1948. pic.twitter.com/PHxLDQlUme

by moodvintage
mkalus shared this story from moodvintage on Twitter.

Manchester Small-Scale Experimental Machine, the first stored-program computer, first ran on June 21st, 1948. pic.twitter.com/PHxLDQlUme





21 Jun 22:18

WFH

by rands

Up at 7am. Check the outfit, the professional outfit, on the floor to make sure it still fits my mood. Adjust as necessary. Shower and put on the outfit. Consider wearing shoes, but don’t. My shoes are lonely.

Upstairs. Coffee (black) and a glance at the view. Santa Cruz Mountains. No fog today. Walk into the office and appreciate the lights are already on. A prior WFH project involved office automation. I never turn lights on or off during the work week. Sit down at the desk and spend five minutes on a tidy. This daily tidy keeps the desk a wide-open space save for a Zebra Sarasa .5 black gel pen, a Mile Marker Field Notes, a box of Altoids, and a bright yellow cloth to clean my glasses.

Close to 7:30am now. Office is well-lit and tidy. Most days, meetings don’t start until 9am, but on Mondays, I give myself a ramp. Today is a Monday. No meetings until 11am so that I can cache the entire week. Examine the calendar for the entire week. Determine:

  • Meetings to decline (No agenda, no specified role for me, or historically low signal).
  • When preparation is necessary. Specifically, note said preparation in Field Notes.
  • WTF? Ask a human a question about the nature of the meeting in Slack. If they don’t respond in 24 hours, decline.

A sip of coffee. Another glance out the window. I purchased a bird feeder on Amazon to hang on the office window, but the birds haven’t found it yet.

Slack now, but music first. A playlist I’ve likely played before because it can’t be novel – it’s background noise — Thompson Twins for some reason. Slack triage is fast. Determine:

  • For direct messages, which need a response now or which can wait? Respond to critical messages and then move messages sans response to the correct sidebar (Fires, Hiring, Planning, and Grab Bag).
  • Once initial responses are done, scan sidebar for pre-existing messages that need a response. Respond. If it’s been there more than a day, reflect on why.

Three significant Slacks (Rands, Work, and Destiny) can be scrubbed in 15 minutes. Ok, mail now. The very achievable goal is inbox zero. Monday is 2x the inbox because most of the world has been sipping their Monday coffee longer than I and are fired up to send some mails. Determine whether to:

  • Respond now.
  • Flag to respond later – set a reminder, so the mail is no longer in the inbox.
  • Research why this unwanted mail is in my inbox and then act accordingly immediately (Spam filter, mail rules, mailing list unsubscribe).

Repeat the mail process for work mail. Capture this week’s relevant work notes in Field Notes. Smile at all the blank screens and take another sip of coffee. It’s 8:15am now.

Stand up and tidy the room now. Small piles of stuff have emerged over the past week, and these are devious piles. They claim they are important clutter, but they are just clutter. Disassemble the piles into their constituent parts and place them in the proper place. If a proper place does not exist, reflect on a strategy for developing a proper place, and either create the place or note proper place creation thoughts in Field Notes. Likely a weekend project.

Stare at the timepieces on the left side of the desk. Like the outfit, the watch complements my mental mood. Serious? Playful? Colorful? All business? They are called timepieces, not watches because they are objects I’ve taken care of in finding, they are a reflection of design I care about, and the act of selecting one is comfortably deliberate. Omega Speedmaster today. The moon watch. A classic.

A blue jay is on the deck railing now. I suspect he can smell and/or see the birdseed. He’s bouncing around. He can see me and doesn’t give a shit about my thoughtful outfit nor my classic timepiece.

8:30am. The hard part of Monday. Caching the entire week. Determine what data I need at my fingertips to answer hard questions. Run bug queries, read presentations, scroll around in Slack channels, send clarifying DMs, and review agendas. The prior week calendar review has illuminated most of the critical questions, but the caching process creates more.

The blue jay is still bouncing around on the railing. Sunflower seeds are in his future.

The hard part of the weekly caching process is focus.

Caching and the resulting research fills the hour and drains the first cup of coffee. Before I stand up for cup two, review the day now and mentally note which meetings can occur outside via audio. During these meetings, I will walk the property, pull weeds, admire redwoods and oaks, and discuss important work topics of the day, but these meetings are mostly defined by being outside of the office. Two a day – minimum. Critical mental health investment.

Stand up, walk to the kitchen and pour coffee number two. The dogs are sitting in the living room looking at me expectantly, and I tell them as I tell them most mornings, “I don’t feed you. Claire does.” They mistake the spoken word for a food commitment and get excited, but droop when I walk back to the office. Sorry, Marleau. Sorry, Gracie Lou.

9:45am now. I understand the week now. I am amply prepared for obvious hard questions, moderately prepared for curveball questions, and have queries to smart people where I don’t know answers. Glance again at the calendar: Is it stocked with useful and productive meetings? Yes? Good.1

Ok, cache the world now. Let’s start with the markets. Up a lot. Why? Optimism. Everyone desperately wants to return to normality. I am a professional optimist, but we are not returning to normal. Ever. This is a different forever situation, and the sooner we realize that and start to plan accordingly, the sooner we will feel unstuck.

Scan the news. Anyone talking about facts rather than feelings? Nope. Keep scanning. BBC and NPR tend to be the highest signal. Ok, now Feedly. Writings of cherished friends. Treasured time. A section for entertainment because I miss movies. I skip the news section but spend a good amount of time on video game developments. Articles are never flagged for reading later – if they don’t make the cut this morning, they are gone forever.

That was about an hour. Glance again outside. It’s still sunny, but there is a thick blanket of fog hiding the valley. Across the way, the top of a small mountain pokes out the fog like an island. I’ve always wondered why this happens at this time of day. I suspect the growing heat of the nearby central valley is sucking marine air inland, but I have no facts save for the fact the blue jay is nowhere to be seen.

Getting close to my first meeting of the day. Delightfully, it’s outside. Go downstairs and walk to the sliding door on the side of the house facing the forest. My slippers are here on the floor from the last meeting of last week. This is what I wear when I walk around the forest. Slippers.

My shoes are lonely.2


  1. Remain deeply worried that I don’t think we know how to do brainstorming or other crucial creative meetings in this new context. 
  2. Thanks to Ben Stewart for asking me about my WFH routine on the #ask-rands-anything channel on the Rands Leadership Slack. I started typing the answer when I discovered there was a blog post there. 
21 Jun 22:17

Surface Headphones 2 & Surface Pro X :: Matte Black

by Volker Weber

270ccf5e0ec1a1b1a4dbc5f54905a5de

I love this understated design. No flashy brand logos, no word marks, just four shiny black squares on an otherwise matte black surface. Neither device needs to be marked Surface or Microsoft. They have an unmistakable design. The same holds true for the matte black Surface Laptop 3.

More >

21 Jun 06:48

COVID-19 Journal: Day 91

by george
I'm not sure if the first wave has broken yet or if we're now in a phase of local eddies or if the second wave is building or what, but, I feel like everything is going to be OK because today I had the first sausage roll I've had in at least 91 days. I LOVE sausage rolls. Especially good ones. I also love sesame prawn toast. I have to eat both of these foodstuffs when they are proximal. Had a sunny
21 Jun 06:48

One Advantage of the App Store That’s Gone

The best part of the App Store, years ago, from this developer’s point of view, was that it was easy to charge money for an app. No need to set up a system — just choose the price, and Apple takes care of everything. So easy!

But these days, in almost all cases, you’d be ill-advised to charge up front for your app. You need a trial version and in-app purchasing (IAP) and maybe a subscription.

Here’s the thing: this is a massive pain in the ass to implement, test, and support — Apple does not make it easy. It could, I think, make certain common patterns basically turn-key (like trial versions + IAP), but it hasn’t.

This means that, for many developers, the very best thing about the App Store — the thing that actually helped their business — is gone.

And it’s not just gone — it’s probably actually more difficult doing this stuff via the App Store than doing the same things (trial, IAP, subscription) using non-Apple systems such as Stripe.

(And, as a bonus, Stripe isn’t going to review your app’s business model and tell you no.)
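For a sense of the contrast, here is roughly what the trial-plus-subscription pattern looks like server-side with Stripe’s Python library. This is a sketch under assumptions, not production code: the API key, price ID, and URLs are placeholders, and the exact Checkout parameters have shifted over the years, so treat the field names as illustrative rather than authoritative.

```python
import stripe

stripe.api_key = "sk_test_PLACEHOLDER"  # placeholder secret key

# One hosted Checkout session: a subscription that starts with a 14-day
# free trial. "price_PLACEHOLDER" stands in for a Price created in the
# Stripe dashboard.
session = stripe.checkout.Session.create(
    mode="subscription",
    payment_method_types=["card"],
    line_items=[{"price": "price_PLACEHOLDER", "quantity": 1}],
    subscription_data={"trial_period_days": 14},
    success_url="https://example.com/thanks",
    cancel_url="https://example.com/cancelled",
)

# Redirect the customer to the session and unlock the app when the
# corresponding webhook event arrives.
print(session.id)
```

The point is not the particular fields but that the whole trial-and-subscription flow is one API call plus a webhook handler, with nobody reviewing the business model along the way.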

21 Jun 06:47

Virus Pastorale Before the Return of the Cars

by Gordon Price

Friday, June 9th – just up from the Rowing Club on Park Drive.  Mid-afternoon.

This was what it was like, and won’t be anymore.  That’s okay, everything is changing in these times.

No matter how it turns out, the Spring of the Virus will be remembered as a mix of bliss and dread.  Understandably there’s a desire to return to normality – but at what price and how much of the bliss?

Like this:

On the afternoon of June 19th, the cars return, blinkers flashing, as they start to mix with cyclists who variously occupy the asphalt from curb to curb. The parking lots are not yet open.

Here’s a video of a moment in the transition: Park Drive at Lumberman’s Arch – June 19th

The signs are up:

Cars rarely drive at 15 km/h.  Nor do a lot of cyclists when they’re pacing themselves around Park Drive.  Both want to go faster.  Cars like driving at 30 km/h or more.  Cyclists like a comfortable speed of 15 to 20 km/h, and more when racing.

Will the expected speed for cars stay at 15, when it probably won’t be for cyclists?   Will the Park Board have to enforce a differential speed limit for users on either side of the barrier?

By the end of June, there will be a new sensibility on Park Drive as the bikes all move over into one lane and the vehicles another – each with less space than they’re used to.  Meanwhile, down on the seawall, the same questions arise: accessibility for whom, and how?

I hope we’ll still see moments like this – a short video of Park Drive at Lumberman’s Arch, as a diverse group of road users wheel by.  Diverse not in ethnicity but in the various ways they wheel.

Diversity

View attached file (59.6 MB, video/quicktime)
21 Jun 06:46

Jay-Z bought full-page adverts in newspapers across the US to honour George Floyd

by Jenny Brewer
mkalus shared this story from It's Nice That.

The black-and-white ad quotes Dr Martin Luther King Jr’s speech “How Long, Not Long” and is signed by the families of racial brutality victims and civil rights organisations.

21 Jun 06:46

The FitArt app combines an art show with a fitness regime for the ultimate artrobic exercise

by Jenny Brewer
mkalus shared this story from It's Nice That.

Launched by Swiss gallery Roehrs & Boetsch, the app features a series of 30-second routines each designed by a different artist.

21 Jun 06:45

June 2020 Papers

by Greg Wilson

I just downloaded 30 papers from ICSE 2020 that are (a) more interesting and useful than 90% of what’s in the undergrad SE textbooks I’ve read and (b) probably won’t make it onto most programmers’ radar for years, if ever. I’ve included titles, links and abstracts below, and have two requests and an observation:

  1. Please make an open access copy of your paper really (really) easy to find. A paywall is about as welcoming as a raised middle finger: if I have to resort to SciHub, I’m going to assume you don’t really want me to read your paper. (I’ve left 7 good ones off this list for this reason.)

  2. Please make the DOI for your paper really, really easy to find, and put the abstract online as well. doi2bib is one of the most useful little things on the internet; if anyone ever builds doi2abstract, I will do my utter best to have them canonized. (A sketch of what such a tool might look like follows this list.)

  3. I will bet my entire stock of programming books that the average undergraduate in biology or geology knows more about current research questions and methods in their field than the average computer science undergraduate does about questions and methods in software engineering research. Until we close that gap, I think software engineering research will continue to chase practice rather than lead it, and will continue to be (mostly) ignored by the people it’s supposed to help.
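Here is the doi2abstract sketch promised in point 2. It leans on Crossref’s public works API, which is an assumption about where the metadata should come from; many publishers never deposit abstracts with Crossref, so a best-effort lookup like this will often come back empty.

```python
import json
import re
import sys
import urllib.parse
import urllib.request

def doi2abstract(doi):
    """Best-effort abstract lookup for a DOI via the Crossref works API.

    Returns None when the publisher has not deposited an abstract with
    Crossref, which is common.
    """
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    with urllib.request.urlopen(url) as response:
        record = json.load(response)["message"]
    abstract = record.get("abstract")
    if abstract is None:
        return None
    # Crossref stores abstracts as JATS XML; strip the tags crudely.
    return re.sub(r"<[^>]+>", " ", abstract).strip()

if __name__ == "__main__":
    # usage: python doi2abstract.py <DOI>
    print(doi2abstract(sys.argv[1]) or "No abstract deposited for this DOI.")
```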

There are lots of other good papers on the conference site; many lie outside my areas of interest and expertise, but are well worth reading.

Claes and Mäntylä: 20-MAD - 20 Years of Issues and Commits of Mozilla and Apache Development

Data of long-lived and high profile projects is valuable for research on successful software engineering in the wild. Having a dataset with different linked software repositories of such projects enables deeper diving investigations. This paper presents 20-MAD, a dataset linking the commit and issue data of Mozilla and Apache projects. It includes over 20 years of information about 765 projects, 3.4M commits, 2.3M issues, and 17.3M issue comments, and its compressed size is over 6 GB. The data contains all the typical information about source code commits (e.g., lines added and removed, message and commit time) and issues (status, severity, votes, and summary). The issue comments have been pre-processed for natural language processing and sentiment analysis. This includes emoticons and valence and arousal scores. Linking code repository and issue tracker information allows studying individuals in two types of repositories and provides more accurate time zone information for issue trackers as well. To our knowledge, this is the largest linked dataset in size and in project lifetime that is not based on GitHub.

Dey et al: Detecting and Characterizing Bots that Commit Code

Background: Some developer activity traditionally performed manually, such as making code commits, opening, managing, or closing issues is increasingly subject to automation in many OSS projects. Specifically, such activity is often performed by tools that react to events or run at specific times. We refer to such automation tools as bots and, in many software mining scenarios related to developer productivity or code quality it is desirable to identify bots in order to separate their actions from actions of individuals.

Aim: Find an automated way of identifying bots and code committed by these bots, and to characterize the types of bots based on their activity patterns.

Method and Result: We propose BIMAN, a systematic approach to detect bots using author names, commit messages, files modified by the commit, and projects associated with the commits. For our test data, the value for AUC-ROC was 0.9. We also characterized these bots based on the time patterns of their code commits and the types of files modified, and found that they primarily work with documentation files and web pages, and these files are most prevalent in HTML and JavaScript ecosystems. We have compiled a shareable dataset containing detailed information about 461 bots we found (all of whom have more than 1000 commits) and 13,762,430 commits they created.
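The abstract only tells us which signals BIMAN uses, so the sketch below is not the paper’s method; it is the kind of crude name-based baseline a mining script might start from before reaching for something like BIMAN.

```python
import re

# Naive heuristic: flag authors whose names look bot-like. BIMAN goes much
# further (commit messages, files changed, associated projects); this is
# only a baseline for comparison.
BOT_NAME = re.compile(r"\[bot\]$|(^|[-_ ])bot([-_ ]|$)", re.IGNORECASE)

def looks_like_bot(author_name: str) -> bool:
    return bool(BOT_NAME.search(author_name))

for name in ["dependabot[bot]", "ci-bot", "Jane Developer", "Abbott Lee"]:
    print(f"{name!r}: {'likely bot' if looks_like_bot(name) else 'likely human'}")
```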

Durieux et al: Empirical Study of Restarted and Flaky Builds on Travis CI

Continuous Integration (CI) is a development practice where developers frequently integrate code into a common codebase. After the code is integrated, the CI server runs a test suite and other tools to produce a set of reports (e.g., output of linters and tests). If the result of a CI test run is unexpected, developers have the option to manually restart the build, re-running the same test suite on the same code; this can reveal build flakiness, if the restarted build outcome differs from the original build. In this study, we analyze restarted builds, flaky builds, and their impact on the development workflow. We observe that developers restart at least 1.72% of builds, amounting to 56,522 restarted builds in our Travis CI dataset. We observe that more mature and more complex projects are more likely to include restarted builds. The restarted builds are mostly builds that are initially failing due to a test, network problem, or a Travis CI limitation such as an execution timeout. Finally, we observe that restarted builds have a major impact on development workflow. Indeed, in 54.42% of the restarted builds, the developers analyze and restart a build within an hour of the initial failure. This suggests that developers wait for CI results, interrupting their workflow to address the issue. Restarted builds also slow down the merging of pull requests by a factor of three, bringing median merging time from 16h to 48h.

Fang et al: Need for Tweet: How Open Source Developers Talk About Their GitHub Work on Twitter

Social media, especially Twitter, has always been a part of the professional lives of software developers, with prior work reporting on a diversity of usage scenarios, including sharing information, staying current, and promoting one’s work. However, previous studies of Twitter use by software developers are generally restricted to surveys or small samples, and typically lack information about activities of the study subjects (and their outcomes) on other platforms. To enable such future research, in this paper we propose a computational approach to cross-linking users on Twitter and GitHub, the dominant platform for hosting open-source development, revealing 70,428 users active on both. As a preliminary analysis of this dataset, we report on a case study of 800 tweets by open-source developers about GitHub work, combining precise automatic characterization of tweet authors in terms of their relationship to the GitHub items linked in their tweets with a deep qualitative analysis of the tweet contents. We find that developers have very distinct behavioral patterns when including GitHub links in their tweets and these patterns are correlated with the relationship between the tweet author and the repository they link to. Based on this analysis, we hypothesize about what might explain such behavioral differences and what the implications of different tweeting patterns could be for the sustainability of GitHub projects.

Girardi et al: Recognizing Developers’ Emotions while Programming

Developers experience a wide range of emotions during programming tasks, which may have an impact on job performance. In this paper, we present an empirical study aimed at (i) investigating the link between emotion and progress, (ii) understanding the triggers for developers’ emotions and the strategies to deal with negative ones, and (iii) identifying the minimal set of non-invasive biometric sensors for emotion recognition during programming tasks. Results confirm previous findings about the relation between emotions and perceived productivity. Furthermore, we show that developers’ emotions can be reliably recognized using only a wristband capturing the electrodermal activity and heart-related metrics.

Gold and Krinke: Ethical Mining - A Case Study on MSR Mining Challenges

Research in Mining Software Repositories (MSR) is research involving human subjects, as the repositories usually contain data about developers’ interactions with the repositories. Therefore, any research in the area needs to consider the ethics implications of the intended activity before starting. This paper presents a discussion of the ethics implications of MSR research, using the mining challenges from the years 2010 to 2019 as a case study. It highlights problems that one may encounter in creating such datasets, and discusses ethics challenges that may be encountered when using existing datasets. An analysis of 102 accepted papers to the Mining Challenge Track suggests that none had an explicit discussion of ethics considerations. Whilst this does not necessarily mean ethics were not considered, the sparsity of discussion leads us to suggest that the MSR community should at least increase awareness by openly discussing ethics considerations.

Han et al: What do Programmers Discuss about Deep Learning Frameworks

Deep learning has gained tremendous traction from the developer and researcher communities. It plays an increasingly significant role in a number of application domains. Deep learning frameworks are proposed to help developers and researchers easily leverage deep learning technologies, and they attract a great number of discussions on popular platforms, i.e., Stack Overflow and GitHub. To understand and compare the insights from these two platforms, we mine the topics of interests from these two platforms. Specifically, we apply Latent Dirichlet Allocation (LDA) topic modeling techniques to derive the discussion topics related to three popular deep learning frameworks, namely, Tensorflow, PyTorch and Theano. Within each platform, we compare the topics across the three deep learning frameworks. Moreover, we make a comparison of topics between the two platforms. Our observations include 1) a wide range of topics that are discussed about the three deep learning frameworks on both platforms, and the most popular workflow stages are Model Training and Preliminary Preparation. 2) the topic distributions at the workflow level and topic category level on Tensorflow and PyTorch are always similar while the topic distribution pattern on Theano is quite different. In addition, the topic trends at the workflow level and topic category level of the three deep learning frameworks are quite different. 3) the topics at the workflow level show different trends across the two platforms. e.g., the trend of the Preliminary Preparation stage topic on Stack Overflow comes to be relatively stable after 2016, while the trend of it on GitHub shows a stronger upward trend after 2016. Besides, the Model Training stage topic still achieves the highest impact scores across two platforms. Based on the findings, we also discuss implications for practitioners and researchers.
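For readers who haven’t met the technique the paper relies on, here is a minimal LDA example with scikit-learn. The toy posts, the two-topic choice, and the parameter values are all invented for illustration; this is not the authors’ pipeline or data.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-ins for Stack Overflow / GitHub discussion text.
posts = [
    "tensorflow session crashes when training the model on gpu",
    "how to load training data and preprocess images in pytorch",
    "theano shared variable update rule during model training",
    "pytorch dataloader preprocessing pipeline for image data",
    "tensorflow gpu memory error while training a large model",
    "preparing and cleaning a dataset before preliminary training",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(posts)

# Two topics, loosely "model training" vs. "data preparation".
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {topic_idx}: {', '.join(top_terms)}")
```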

Hilderbrand et al: Engineering Gender-Inclusivity into Software: Tales from the Trenches

Although the need for gender-inclusivity in software itself is gaining attention among both SE researchers and SE practitioners, and methods have been published to help, little has been reported on how to make such methods work in real-world settings. For example, how do busy software practitioners use such methods in low-cost ways? How do they endeavor to maximize benefits from using them? How do they avoid the controversies that can arise in talking about gender? To find out how teams were handling these and similar questions, we turned to 10 real-world software teams. We present these teams’ experiences “in the trenches,” in the form of 12 practices and 3 potential pitfalls, so as to provide their insights to other real-world software teams trying to engineer gender-inclusivity into their software products.

Ingram and Drachen: How Software Practitioners Use Informal Local Meetups to Share Software Engineering Knowledge

Informal technology “meetups” have become an important aspect of the software development community, engaging many thousands of practitioners on a regular basis. However, although local technology meetups are well-attended by developers, little is known about their motivations for participating, the type or usefulness of information that they acquire, and how local meetups might differ from and complement other available communication channels for software engineering information. We interviewed the leaders of technology-oriented Meetup groups, and collected quantitative information via a survey distributed to participants in technology-oriented groups. Our findings suggest that participants in these groups are primarily experienced software practitioners, who use Meetup for staying abreast of new developments, building local networks and achieving transfer of rich tacit knowledge with peers to improve their practice. We also suggest that face to face meetings are useful forums for exchanging tacit knowledge and contextual information needed for software engineering practice.

Johnson et al: Causal Testing: Understanding Defects’ Root Causes

Understanding the root cause of a defect is critical to isolating and repairing buggy behavior. We present Causal Testing, a new method of root-cause analysis that relies on the theory of counterfactual causality to identify a set of executions that likely hold key causal information necessary to understand and repair buggy behavior. Using the Defects4J benchmark, we find that Causal Testing could be applied to 71% of real-world defects, and for 77% of those, it can help developers identify the root cause of the defect. A controlled experiment with 37 developers shows that Causal Testing improves participants’ ability to identify the cause of the defect from 80% of the time with standard testing tools to 86% of the time with Causal Testing. The participants report that Causal Testing provides useful information they cannot get using tools such as JUnit. Holmes, our prototype, open-source Eclipse plugin implementation of Causal Testing, is available at this http URL.

Karampatsis et al: Big Code != Big Vocabulary: Open-Vocabulary Models for Source Code

Statistical language modeling techniques have successfully been applied to large source code corpora, yielding a variety of new software development tools, such as tools for code suggestion, improving readability, and API migration. A major issue with these techniques is that code introduces new vocabulary at a far higher rate than natural language, as new identifier names proliferate. Both large vocabularies and out-of-vocabulary issues severely affect Neural Language Models (NLMs) of source code, degrading their performance and rendering them unable to scale.

In this paper, we address this issue by: 1) studying how various modelling choices impact the resulting vocabulary on a large-scale corpus of 13,362 projects; 2) presenting an open vocabulary source code NLM that can scale to such a corpus, 100 times larger than in previous work; and 3) showing that such models outperform the state of the art on three distinct code corpora (Java, C, Python). To our knowledge, these are the largest NLMs for code that have been reported.

All datasets, code, and trained models used in this work are publicly available.

Kirschner et al: Debugging Inputs

When a program fails to process an input, it need not be the program code that is at fault. It can also be that the input data is faulty, for instance as result of data corruption. To get the data processed, one then has to debug the input data—that is,

  1. identify which parts of the input data prevent processing, and
  2. recover as much of the (valuable) input data as possible.

In this paper, we present a general-purpose algorithm called ddmax that addresses these problems automatically. Through experiments, ddmax maximizes the subset of the input that can still be processed by the program, thus recovering and repairing as much data as possible; the difference between the original failing input and the “maximized” passing input includes all input fragments that could not be processed. To the best of our knowledge, ddmax is the first approach that fixes faults in the input data without requiring program analysis. In our evaluation, ddmax repaired about 69% of input files and recovered about 78% of data within one minute per input.
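To make the “debug the input, not the program” idea concrete, here is a toy loop that repairs a failing input by deleting characters until it passes. It only illustrates the goal; ddmax itself is a generalisation of delta debugging and is far more capable and efficient than this sketch.

```python
import json

def processes_ok(s):
    # Stand-in for "the program processed this input without failing".
    try:
        json.loads(s)
        return True
    except json.JSONDecodeError:
        return False

def maximize_passing_input(data, processes_ok):
    """Toy input repair: drop one character at a time until the input passes.

    Only handles defects fixable by single deletions; a real tool such as
    ddmax searches the space of input subsets far more cleverly.
    """
    current = data
    while not processes_ok(current):
        for i in range(len(current)):
            candidate = current[:i] + current[i + 1:]
            if processes_ok(candidate):
                current = candidate
                break
        else:
            break  # no single deletion helps; give up
    return current

broken = '{"a": 1#, "b": 2}'  # a stray '#' corrupts the JSON
print(maximize_passing_input(broken, processes_ok))  # {"a": 1, "b": 2}
```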

Krueger et al: Neurological Divide: An fMRI Study of Prose and Code Writing

Software engineering involves writing new code or editing existing code. Recent efforts have investigated the neural processes associated with reading and comprehending code—however, we lack a thorough understanding of the human cognitive processes underlying code writing. While prose reading and writing have been studied thoroughly, that same scrutiny has not been applied to code writing. In this paper, we leverage functional brain imaging to investigate neural representations of code writing in comparison to prose writing. We present the first human study in which participants wrote and edited code and prose while undergoing a functional magnetic resonance imaging (fMRI) brain scan, making use of a full-sized fMRI-safe QWERTY keyboard.

We find that code writing and prose writing are significantly dissimilar neural tasks. While prose writing entails significant left hemisphere activity associated with language, code writing involves more activations of the right hemisphere, including regions associated with attention control, working memory, planning and spatial cognition. These findings are unlike existing work in which code and prose comprehension were studied. By contrast, we present the first evidence suggesting that code and prose writing are quite dissimilar at the neural level.

Louis et al: Where Should I Comment My Code? A Dataset and Model for Predicting Locations that Need Comments

Programmers should write code comments, but not on every line of code. We have created a machine learning model that suggests locations where a programmer should write a code comment. We trained it on existing commented code to learn locations that are chosen by developers. Once trained, the model can predict locations in new code. Our models achieved precision of 74% and recall of 13% in identifying comment-worthy locations. This first success opens the door to future work, both in the new where-to-comment problem and in guiding comment generation. Our code and data is available at http://groups.inf.ed.ac.uk/cup/comment-locator/.

Overney et al: How to Not Get Rich: An Empirical Study of Donations in Open Source

Open source is ubiquitous and critical infrastructure, yet funding and sustaining it is challenging. While there are many different funding models for open-source donations and concerted efforts through foundations, donation platforms like Paypal, Patreon, or OpenCollective are popular and low-bar forms to raise funds for open-source development, for which GitHub recently even built explicit support. With a mixed-method study, we explore the emerging and largely unexplored phenomenon of donations in open source: We quantify how commonly open-source projects ask for donations, statistically model characteristics of projects that ask for and receive donations, analyze for what the requested funds are needed and used, and assess whether the received donations achieve the intended outcomes. We find 25,885 projects asking for donations on GitHub, often to support engineering activities; however, we also find no clear evidence that donations influence the activity level of a project. In fact, we find that donations are used in a multitude of ways, raising new research questions about effective funding.

Rahman et al: Gang of Eight: A Defect Taxonomy for Infrastructure as Code Scripts

Defects in infrastructure as code (IaC) scripts can have serious consequences, for example, creating large-scale system outages. A taxonomy of IaC defects can be useful for understanding the nature of defects, and identifying activities needed to fix and prevent defects in IaC scripts. The goal of this paper is to help practitioners improve the quality of infrastructure as code (IaC) scripts by developing a defect taxonomy for IaC scripts through qualitative analysis. We develop a taxonomy of IaC defects by applying qualitative analysis on 1,448 defect-related commits collected from open source software (OSS) repositories of the Openstack organization. We conduct a survey with 66 practitioners to assess if they agree with the identified defect categories included in our taxonomy. We quantify the frequency of identified defect categories by analyzing 80,425 commits collected from 291 OSS repositories spanning across 2005 to 2019.

Our defect taxonomy for IaC consists of eight categories, including a category specific to IaC called idempotency (i.e., defects that lead to incorrect system provisioning when the same IaC script is executed multiple times). We observe the surveyed 66 practitioners to agree most with idempotency. The most frequent defect category is configuration data i.e., providing erroneous configuration data in IaC scripts. Our taxonomy and the quantified frequency of the defect categories can help practitioners to improve IaC script quality by prioritizing verification and validation efforts.

Song et al: Using Peer Code Review as an Educational Tool

Code-review, the systematic examination of source code, is widely used in industry, but seldom used in courses. We designed and implemented a rubric-driven online peer code-review system (PCR) that we have deployed for two semesters, during which 228 students performed over 1003 code reviews. PCR is designed to meet four goals: (1) Provide timely feedback to students on their submissions, (2) Teach students the art of code review, (3) Allow custom feedback on submissions even in massive online classes, and (4) Allow students to learn from each other. We report on using PCR, in particular, the accuracy of student-based reviews, the surprising number of free-form comments made by students, the variability of staff-based reviews, how student engagement impacts the accuracy, the additional workload, and anecdotal perspectives of students. We describe some critical design considerations for PCR including rubric design, the importance of PCR training on each assignment to acclimate students to the rubric, and how we match student reviewers to student submissions.

Wang et al: An Empirical Study on Regular Expression Bugs

Understanding the nature of regular expression (regex) issues is important to tackle practical issues developers face in regular expression usage. Knowledge about the nature and frequency of various types of regular expression issues, such as those related to performance, API misuse, and code smells, for example, can guide testing, inform documentation writers, and motivate refactoring efforts. However, beyond ReDoS (Regular expression Denial of Service), little is known about to what extent regular expression issues affect software development and how these issues are addressed in practice.

This paper presents a comprehensive empirical study of 350 merged regex-related pull requests (PRs) from Apache, Mozilla, Facebook, and Google GitHub repositories. Through classifying the root causes and manifestations of those bugs, we show that incorrect regular expression behavior is the dominant root cause of regular expression bugs (46.3%). The remaining root causes are incorrect API usage (9.3%) and other code issues that require regular expression changes in the fix (29.5%). By studying the code changes of regex-related pull requests, we observe that fixing regular expression bugs is nontrivial as it takes more time and more lines of code to fix them compared to the general pull requests. The results of this study contribute to a broader understanding of the practical problems faced by developers when using regular expressions.

Wang et al: Better Code, Better Sharing: On the Need of Analyzing Jupyter Notebooks

By bringing together code, text, and examples, Jupyter notebooks have become one of the most popular means to produce scientific results in a productive and reproducible way. As many of the notebook authors are experts in their scientific fields, but laymen with respect to software engineering, one may ask questions on the quality of notebooks and their code. In a preliminary study, we experimentally demonstrate that Jupyter notebooks are inundated with poor quality code, e.g., not respecting recommended coding practices, or containing unused variables and deprecated functions. Considering the educational nature of Jupyter notebooks, these poor coding practices as well as the lack of quality control might be propagated into the next generation of developers. Hence, we argue that there is a strong need to programmatically analyze Jupyter notebooks, calling on our community to pay more attention to the reliability of Jupyter notebooks.

Wurzel Gonçalves et al: Do Explicit Review Strategies Improve Code Review Performance?

Context: Code review is a fundamental, yet expensive part of software engineering. Therefore, research on understanding code review and its efficiency and performance is paramount.

Objective: We aim to test the effect of a guidance approach on review effectiveness and efficiency. This effect is expected to work by lowering the cognitive load of the task; thus, we analyze the mediation relationship as well.

Method: To investigate this effect, we employ an experimental design where professional developers have to perform three code reviews. We use three conditions: no guidance, a checklist, and a checklist-based review strategy. Furthermore, we measure the reviewers’ cognitive load.

Limitations: The main limitations of this study concern the specific cohort of participants, the mono-operation bias for the guidance conditions, and the generalizability to other changes and defects.

Zampetti et al: An Empirical Characterization of Bad Practices in Continuous Integration

Continuous Integration (CI) has been claimed to introduce several benefits in software development, including high software quality and reliability. However, recent work pointed out challenges, barriers and bad practices characterizing its adoption. This paper empirically investigates what are the bad practices experienced by developers applying CI. The investigation has been conducted by leveraging semi-structured interviews of 13 experts and mining more than 2,300 Stack Overflow posts. As a result, we compiled a catalog of 79 CI bad smells belonging to 7 categories related to different dimensions of a CI pipeline management and process. We have also investigated the perceived importance of the identified bad smells through a survey involving 26 professional developers, and discussed how the results of our study relate to existing knowledge about CI bad practices. Whilst some results, such as the poor usage of branches, confirm existing literature, the study also highlights uncovered bad practices, e.g., related to static analysis tools or the abuse of shell scripts, and contradict knowledge from existing literature, e.g., about avoiding nightly builds. We discuss the implications of our catalog of CI bad smells for (i) practitioners, e.g., favor specific, portable tools over hacking, and do not ignore nor hide build failures, (ii) educators, e.g., teach CI culture, not just technology, and teach CI by providing examples of what not to do, and (iii) researchers, e.g., developing support for failure analysis, as well as automated CI bad smell detectors.

Zhang et al: An Empirical Study on Program Failures of Deep Learning Jobs

Deep learning has made significant achievements in many application areas. To train and test models more efficiently, enterprise developers submit and run their deep learning programs on a shared, multi-tenant platform. However, some of the programs fail after a long execution time due to code/script defects, which reduces the development productivity and wastes expensive resources such as GPU, storage, and network I/O.

This paper presents the first comprehensive empirical study on program failures of deep learning jobs. 4960 real failures are collected from a deep learning platform in Microsoft. We manually examine their failure messages and classify them into 20 categories. In addition, we identify the common root causes and bug-fix solutions on a sample of 400 failures. To better understand the current testing and debugging practices for deep learning, we also conduct developer interviews. Our major findings include: (1) 48.0% of the failures occur in the interaction with the platform rather than in the execution of code logic, mostly due to the discrepancies between local and platform execution environments; (2) Deep learning specific failures (13.5%) are mainly caused by inappropriate model parameters/structures and framework API misunderstanding; (3) Current debugging practices are not efficient for fault localization in many cases, and developers need more deep learning specific tools. Based on our findings, we further suggest possible research topics and tooling support that could facilitate future deep learning development.

Zieris and Prechelt: Explaining Pair Programming Session Dynamics from Knowledge Gaps

Background: Despite a lot of research on the effectiveness of Pair Programming (PP), the question when it is useful or less useful remains unsettled.

Method: We analyze recordings of many industrial PP sessions with Grounded Theory Methodology and build on prior work that identified various phenomena related to within-session knowledge build-up and transfer. We validate our findings with practitioners.

Result: We identify two fundamentally different types of required knowledge and explain how different constellations of knowledge gaps in these two respects lead to different session dynamics. Gaps in project-specific systems knowledge are more hampering than gaps in general programming knowledge and are dealt with first and foremost in a PP session.

Conclusion: Partner constellations with complementary knowledge make PP a particularly effective practice. In PP sessions, differences in system understanding are more important than differences in general software development knowledge.

And a reminder

Every single one of the sources cited in the Christchurch killer’s manifesto had a store on Shopify. The company has refused to deplatform any of them.

21 Jun 06:44

Twitter Favorites: [GDNAToronto] Here’s a sight for sore eyes: Avenue Open Kitchen is re-opening soon! They will offer takeout to start, and have pl… https://t.co/1VqdtJZCkC

Garment District Neighbourhood Association @GDNAToronto
Here’s a sight for sore eyes: Avenue Open Kitchen is re-opening soon! They will offer takeout to start, and have pl… twitter.com/i/web/status/1…
21 Jun 06:42

The iOS App Store Brings Users Only Because It’s the Only Choice

One might argue that developers should love the App Store because it brings the users.

AppleInsider writes about the App Store, Hey app, and David Heinemeier Hansson:

Like any other product or service, Hey has to persuade people that they have a problem it can solve, and that it’s worth paying for. You can’t persuade people of anything, though, if they don’t know about it. And then if you do persuade them, you can’t profit without a way to get your product into their hands.

His first argument against the App Store on Apple’s cut got Hansson and Hey a lot more notice than it might have. But it’s the App Store that gets his product to people. It’s the App Store that means if he persuades people it’s worth it, they can instantly have it on their iOS device.

This is a misconception that many people have — they think the App Store brings some kind of exceptional distribution and marketing that developers wouldn’t have on their own.

It’s just not true. It lacks even a grain of truth.

Setting up distribution of an app is easy and cheap. I do it for NetNewsWire for Mac with no additional costs beyond what I already pay to host this blog. This was true in 2005 as much as now — distribution is not some exceptional value the App Store provides.

And then there’s marketing. Sure, being featured used to mean something to revenue, but it hasn’t meant that much beyond just ego points in years. To be on the App Store is to be lost within an enormous sea of floating junk. No matter how well you do at your app description and screenshots — even if you get some kind of feature — your app will not be found by many people.

Build it (and upload it to the App Store) and they will not come.

Instead, you have to do marketing on your own, on the web and on social media, outside of the App Store. Just like always. The App Store brings nothing to the table.

So while it’s true to say that all of an iOS app’s users come via the App Store, it’s only true because there’s no other option.

If I could distribute my iOS app outside of the App Store, I would. I’d switch in a heartbeat. Even though it’s free and money isn’t my issue. It would make my work as an app maker easier.

21 Jun 06:42

Juneteenth

by graydon
[CW: slavery, white supremacy, violence]

I had a long-form rambling blog post here about Juneteenth that I ultimately didn't like the tone or arrangement of. But I did like some of its content, so I'll try to reproduce that here in a significantly shorter, hopefully more-readable point-form post.



0. Preface



  • Today is an important commemoration day for the nominal end of slavery in America.

  • Reflecting on that as a white person, in today's new-found white interest in racism discourse, makes me think of a couple things I'm seeing in the writing of fellow whites that are a bit off, so I'm going to write about them here.

  • Two things are: institutions and history.

  • Not trying to derail from present emphasis on police brutality, will in fact buttress that; but also think the most-present aspects of that are well-covered elsewhere and the dimensions I want to discuss are important parts of what I see in Black discourse about racism that are being consciously or unconsciously neglected in white discourse about racism.



1. Institutions



  • Institutions exist. If you can take one point from this post, take that. Part of neoliberal political propaganda is to deny their existence in order to obscure the function of those that favour the ruling class and undermine those that favour the lower classes. If you ever see yourself saying "there are no institutions, only individuals" you are parroting this propaganda.

  • Example institutions: governments, policing, justice and legal systems; military systems; banks, corporations, markets and economic systems; schools and universities; hospitals, doctors, health authorities and psychiatric systems.

  • Institutions have individual actors within them but they are independent of their individuals; they have constitutive physical, legal, financial, and bureaucratic form that greatly outlives any individual and exerts much greater collective power than any individual.

  • The term "systemic racism" is not a synonym for "ubiquitous individual racism" (which is a separate and real problem); it is a synonym for "institutional racism" and refers to racism that is embedded within the constitutive forms of the institution as much or more than it refers to any individuals associated with the institution. Focusing on individuals is, again, repeating political propaganda that seeks to obscure the function of the institution.

  • Specifically: the policies and records held in an institution can be (and often are) racist without any ongoing human effort. They are documents (or database rows) with their own force, part of the institution itself. Changing an institution at that level requires changing the policies and records. When someone asks to see institutional change, and gives a list of policy-change demands, and is then met with a promise that there will be changes to individuals or staff, that is missing the demand. Policy changes are required.

  • Besides policy changes (which address current and future wrongs), changing records also matters in order to address past wrongs. Records record what happened in the past and if that was wrong, the records are wrong. When someone calls for purging criminal records of people wrongly prosecuted, or granting citizenship to people wrongly excluded from it, or transferring land or money back to people it was stolen from, they are talking about modifying institutional records to address past wrongs. This is sensible and again if it's met with only a promise to change individuals on staff, the point is being missed.



2. History


This entry was originally posted at https://graydon2.dreamwidth.org/280031.html. Please comment there using OpenID.
21 Jun 06:41

6 feet apart is more than you think...

by peter@rukavina.net (Peter Rukavina)

We hosted the monthly Pen Night in our back yard tonight, our first non-Zoom meeting since February.

I wanted to make sure we did it right, so I got out the measuring tape and ensured that there was 6 feet between each chair.

It turns out that 6 feet apart is a lot more apart than I thought; if I hadn’t measured, I likely would have placed the chairs 3 or 4 feet apart, in error.

Makes me realize the people in the grocery store are a lot closer than 6 feet apart a lot of the time.

21 Jun 06:41

Gas station in Germany, 1958. pic.twitter.com/Z0tZ8bIfK3

by moodvintage
mkalus shared this story from moodvintage on Twitter.

1228 likes, 191 retweets