Shared posts

18 Oct 06:59

I Voted, You Should Too

Ballot drop-off is super easy in Washington.

I received my US election ballot yesterday, filled it out today and dropped it off at the nearest drop box. Voting up here in Washington State has been mail-in (or drop-off) for a long while now, and it's pretty awesome. No lines, and I can sit down and take my time to fill it out.

There were a number of things to vote on, but I waited until last to fill in my choice for president, expecting to feel a rush of satisfaction. And there was a bit of that, but there was also a whole range of emotions I wasn't expecting, including sadness, anger, and frustration. The biggest emotion I felt, though, was fear that this train-wreck of an administration would continue for another four years.

I know it's not as easy for everyone to vote as it is for us in Washington. But regardless, please vote if you are of age. And if you're still undecided, vote for someone who cares for all of America, vote for Biden.

18 Oct 01:00

Does open sourcing hurt mid-sized hardware makers?

by charlie

In a recent video, Seon talked about his struggle with open sourcing his products.

He’s in favor of open sourcing, obviously, and makes most of his software and hardware designs available as open source. From what I can tell from the licenses he uses, the code and designs are free to be used, modified, and sold, though with attribution and share alike.

But he’s concerned about the impact that open sourcing his work has had on him.

For starters, he keeps finding folks who don’t read or understand or respect his license. I’ve seen him remind folks on Discord to respect the license when they use his work. I can’t imagine what happens when folks don’t tell him they’ve used his design and he stumbles upon them, perhaps even on AliExpress.

He also points out the work involved in creating documentation while being time starved. And I get it, part of the reason I don’t have my code and design out there is just that added effort in documentation and clean up for reusability (not that anyone is interested in what I make anyway, and my code is quite ugly and useless, haha).

Mittelmachers
Interestingly, in his discussion, he does make a distinction between tinkerers and big folks with a thriving ecosystem, such as Adafruit. Tinkerers are sharing and not looking to earn from that work, so it’s ok to share and share alike. And folks with a large ecosystem and brand can afford to give their designs away freely, as they have a steady business of also selling things to their audience.

Seon sees himself in the middle somewhere. He talks about making open sourcing decisions based on how it affects his business. And the value of open sourcing his creations is sometimes unclear to him.

Dropping the BOM
This discussion came up when he was explaining why there were a few of his products he would not open source. A reason might be the effort to make something open source, as I said above, especially for something he’s not necessarily intending to sell. But, also, he brought up his concern around giving away what he is working hard to make money and survive on. He wants to give back to the community, but doesn’t want to make it easy for unscrupulous freeloaders to undercut him competitively in sales.

As an example, he talked about why he doesn’t release his BOM or component library. He mentioned how his Gerbers, schematics, and board files are fine to share, but his BOM is too much for him to give away. Yes, he knows that a lot of that info is in the files he shares. But the BOM is basically a shopping list of his components and suppliers, which he feels is a competitive advantage he’s built over years, and giving it away makes it too easy for folks to undercut him.

The business of making
One challenge of going from tinkerer to someone who builds a business on open source hardware is the in-between stage: you are still building a sales stream and aren’t yet big enough, in either sales or brand, for knockoffs to have minimal impact.*

The tension between open sourcing one’s designs and building a book of business has been around for a long time.

And that balance teeters wildly for those who are trying to meaningfully give back to the community while also trying to protect the business they depend on.

Not an easy balance.

What do you think? How does open sourcing help or hold back early stage hardware makers? How do you balance protecting your business and growing your brand and community?

*Of course, this presumes copycats have negligible impact on sales at Adafruit or Sparkfun or any of the other big maker brands. And I wonder if their path to growth has helped them balance between open and closed, and build resilience against those who might undercut them.

Update, 10 hours later….
Wow, I seem to have hit a nerve. A very interesting discussion ensued after Seon picked up on my tweet. I think these kinds of discussions are helpful for us to understand the business of making, and the challenges mid-sized makers face.

Here is the tweet if you want to see the thread that ensued.

Image by Nick Fewings

The post Does open sourcing hurt mid-sized hardware makers? first appeared on Molecularist.
18 Oct 00:59

Google Coral Dev Board mini SBC

by Rui Carmo

I like both the form factor and the specs. $99 is a tad pricey when compared to my current favorite (the Raspberry Pi 3A+), but it has USB-C, 2GB RAM and 8GB eMMC flash (all of which I’d like to see on the next smaller Pi) besides a Google Edge TPU.


18 Oct 00:59

Re-downloading Windows feature updates

by Volker Weber
You may have noticed that after synchronizing updates released on patch Tuesday (October 13, 2020), Windows 10 Feature Updates for versions 1903 and 2004 that were previously downloaded, now show a status of Downloaded = No under the All Windows 10 Updates node. The content for these Windows 10 Feature Updates were revised to address this security issue. As a result of this content revision, any previously downloaded feature updates will need to be downloaded again.

So, if you read that Microsoft installed unwanted Office web apps, that is the reason. It's just the default setup.

More >

18 Oct 00:58

How Sean Penn Went to War Against COVID

by Volker Weber
Nobody exactly asked Sean Penn if he would help set up one of the country’s most impressive coronavirus testing programs. And yet, under the auspices of his nonprofit, CORE, that’s what he did at L.A.’s Dodger Stadium. Now, the Oscar winner who spent the year on the frontlines of the fight to fix the crisis is expanding his group’s efforts—and bracing for a possible winter uptick.

Imagine what could have been done nationwide with the resources of a rich country.

More >

18 Oct 00:57

The idea of white privilege is entirely correct. You become a more empathetic person and a stronger ally by recognising it. But it is a dreadful phrase. For most people, privilege refers to wealth, so using it instantly leads to this disagreement. twitter.com/BBCPolitics/st…

by Ian Dunt (IanDunt)
mkalus shared this story from iandunt on Twitter.

The idea of white privilege is entirely correct. You become a more empathetic person and a stronger ally by recognising it. But it is a dreadful phrase. For most people, privilege refers to wealth, so using it instantly leads to this disagreement. twitter.com/BBCPolitics/st…

“You have this term white privilege… tell that to some of the young boys and girls in my constituency”

Ipswich MP Tom Hunt says white working-class children have been left behind

#politicslive bbc.in/376HCzA pic.twitter.com/j8Qh0tbkB9




341 likes, 95 retweets



386 likes, 51 retweets
18 Oct 00:57

RT @DmitryOpines: London is behaving exactly as you'd expect if it was preparing to sell a compromise involving painful concessions as a ha…

by Dmitry Grozoubinski (DmitryOpines)
mkalus shared this story from mrjamesob on Twitter.

London is behaving exactly as you'd expect if it was preparing to sell a compromise involving painful concessions as a hard won triumph.

That doesn't guarantee that's the plan, but all the hysterics and drama would fit neatly into that script.

irishtimes.com/news/world/uk/…


Retweeted by James O'Brien (mrjamesob) on Saturday, October 17th, 2020 10:47am


869 likes, 280 retweets
17 Oct 05:29

✚ The Process 111 – Excel Limit

by Nathan Yau

A row limit in Microsoft Excel led to an undercount of Covid-19 cases in the thousands. The root of the problem goes past the software though. Read More

17 Oct 05:29

Five Years at Microsoft

by Rui Carmo

It’s hard to believe it’s only been five years. Things have been so overwhelmingly intense that it feels like a lot more, and after a particularly challenging day, I have to wonder if it’s ever gotten any easier.

The Dog Years

I originally joined Microsoft for the dubious (but ultimately quite rewarding) privilege of having a front-row seat on the Azure roller-coaster, and most of what I wanted to achieve (experiencing that transformation, learning the stack, and understanding the business) was pretty much complete by the second year.

And it’s actually strange to look back during these pandemic times and realize that I had already been partially remote since the first year, that meetings already pervaded my agenda, and that it took me around a thousand days to start mulling over what was missing in it for me.

By the end of the fourth year I had already realized I was missing not just the opportunity to do deeper technical stuff but also the potential for growth and for doing things much bigger than I could possibly ever do in Portugal, so yes, moving to Consulting Services across EMEA and “leaving” the (often frustrating and lobby-ridden) Portuguese market was certainly a good way to fix half of what I felt was wrong.

Right Now

As it happens, I’m writing this within a couple of weeks of Microsoft’s 30th anniversary here in Portugal.

Things have been hectic, so I was never quite able to pin down exactly when the transition took place, but all of a sudden I was… mostly OK, and stuff started happening so fast that I’m still feeling the thrall of impostor syndrome. But I think I’ve reached a somewhat stable point now:

  • I’m working in one of the most diverse corporate cultures imaginable (and I don’t just mean diversity in the usual US-centric sense of the word; I mean I work with people from disparate cultures and mindsets that most people, even in the startup world, seldom experience).
  • The scale of what we do is tremendous. Most people still don’t get “the new Microsoft”. They also don’t get Azure, because they look at what it is and not what it’s becoming.
  • As usual throughout my career, I’m smack in the middle of another of Microsoft’s biggest reorgs. And I’m pretty sure it’s going to be at least as transformational as Azure.
  • The vast majority of my current customers (or projects I’m involved with) are in the telco industry, which I’ve always loved (I would go back in a flash) and has renewed priority right now.
  • I’m considered (rightly or wrongly, not sure yet) to be one of the best at what I do (and I’m being quite ironical about this, since I got that accolade primarily because there are very few people with my kind of background and life story at Microsoft, and I suffer from semi-permanent impostor syndrome as a result).

So yes, there’s all of that going for me right now, and, overall, I’m counting myself lucky. I get to work 100% remote, place calls all over the world, meet interesting people… and strategize with them.

Today it was only one, but I made it count.

But it’s still not enough, and although the pandemic has considerably lowered my expectations, the engineer in me is still looking for… something better, more fulfilling and, above all, where I can keep learning.

Someplace where I can do what I love–building stuff that matters.


17 Oct 05:28

Windows Terminal Tips and Tricks

by Kayla Cinnamon

Windows Terminal comes with a lot of features that allow you to customize it and interact with it in a way that feels most comfortable to you. Let’s run through some tips and tricks that could help you set up your terminal in a way that’s perfect for you. At the time of this blog post, Windows Terminal is on version 1.3 and Windows Terminal Preview is on version 1.4.

On first launch

When you first install Windows Terminal, you will be greeted with a Windows PowerShell prompt. Windows Terminal ships with Windows PowerShell, Command Prompt, and Azure Cloud Shell profiles by default.

In addition to these profiles, if you have any Windows Subsystem for Linux (WSL) distributions installed, the terminal will automatically create profiles for those distros as well. If you would like to install additional WSL distros on your machine, you can do so after installing the terminal, and on your next terminal launch the profiles for those distros should automatically appear. These profiles will have Tux as their icon; however, you can change a distro’s icon in your settings to match whichever distro it is. You can find more information about WSL on the WSL docs site.

👉 Note: You will have to download the distro icons as they do not come shipped inside the terminal.
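For instance, once you have an icon file downloaded, pointing a WSL profile at it is a one-line setting. A minimal sketch, assuming an Ubuntu WSL profile and a placeholder icon path (adjust both to your setup):

{
    "name": "Ubuntu-20.04",
    "source": "Windows.Terminal.Wsl",
    // Placeholder path to an icon you downloaded yourself.
    "icon": "C:\\Users\\you\\Pictures\\icons\\ubuntu.png"
}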

Customization

Windows Terminal comes with a large set of default settings, including color schemes and keyboard shortcuts. If you’d like to view the default settings file, you can hold Alt and click on the Settings button inside the dropdown menu.

Global profile settings

Windows Terminal provides you the option of applying a setting to every profile without having to duplicate the setting for each profile entry. This can be done by adding a setting inside the "defaults" object inside the "profiles" object. A list of all possible profile settings can be found on the Profile settings page on our docs site.

"profiles":
    {
        "defaults":
        {
            // Put settings here that you want to apply to all profiles.
            "fontFace": "Cascadia Code"
        },
        "list":
        []
    }

Custom color schemes

Windows Terminal comes with an assortment of color schemes by default. However, there are unlimited possibilities when it comes to color schemes. A great place to find additional terminal schemes is terminalsplash.com.

If you’d like to create your own color scheme, terminal.sexy is a great tool for creating and visualizing your own color schemes.
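To give a sense of where such a scheme ends up, here is a rough sketch of a hand-rolled entry in settings.json; the name and hex values are placeholders, and the remaining ANSI colors follow the same pattern:

"schemes":
[
    {
        "name": "My Scheme",
        "foreground": "#EEEEEE",
        "background": "#1B1B1B",
        "cursorColor": "#FFFFFF"
        // ...plus the sixteen ANSI color entries ("black", "red", "brightBlue", and so on).
    }
],
"profiles":
{
    "defaults":
    {
        // Apply the scheme everywhere by referencing it from the shared defaults.
        "colorScheme": "My Scheme"
    }
}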

💡 Tip: You can match your custom color scheme to a background image by using the PowerToys color picker to grab hex codes for colors to use inside your scheme. PowerToys can be installed using winget with winget install powertoys.

[Image: PowerToys color picker]

Custom command line

You can give your command line some style with Oh my Posh and Terminal-Icons. These tools allow you to customize what your prompt looks like with colors, glyphs, and emojis. To get Oh my Posh with Posh-Git and PSReadline up and running, you can follow this tutorial.

Oh my Posh recently came out with Oh my Posh 3, which has a lot more customization options and is no longer exclusive to PowerShell. After going through the tutorial linked above, you can upgrade to V3 with the following command:

Update-Module -Name oh-my-posh -AllowPrerelease -Scope CurrentUser

[Image: Powerline-styled PowerShell prompt]

You can add a bit more flair to your shell by adding custom icons with Terminal-Icons. The image below is also using a custom Posh theme to add emojis to the prompt line.

👉 Note: You will need to set your font to a Nerd Font in order to get the Terminal Icons to appear.

[Image: Terminal-Icons in the prompt]

Old school fonts

For those of you who are fans of the retro terminal effect, a great place to find old school fonts is at https://int10h.org/oldschool-pc-fonts/.

[Image: retro terminal font]

Places for background images

Desktop wallpapers often look great inside Windows Terminal as background images. Some great places to find background images are Windows Themes as well as WallpaperHub. Windows Terminal supports both images and gifs for background images.

Another popular option is placing the logo of the shell you are using as the background image, to help identify which shell you are in. You can set where the background image is placed with "backgroundImageAlignment" in your settings. More information on background image settings can be found on our docs site.
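As a hedged sketch of what that looks like in a profile (the profile name and image path are placeholders; the docs list every available option), the background image settings sit alongside the profile’s other properties:

{
    "name": "PowerShell",
    // A logo in the corner makes it obvious which shell this pane is running.
    "backgroundImage": "C:\\Users\\you\\Pictures\\powershell-logo.png",
    "backgroundImageAlignment": "bottomRight",
    "backgroundImageStretchMode": "none",
    "backgroundImageOpacity": 0.3
}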

Functionality

wt.exe command line arguments

You can launch your terminal in a specific configuration by using the wt.exe command. Some options you can set are tab and pane arrangements as well as their starting directories and profiles. You can save a custom command as a shortcut and pin it to your taskbar to open your desired configuration.

If you use the wt.exe command inside the command palette, it will take effect in your current terminal window, rather than spinning up a new terminal instance.

This example will open the PowerShell profile in the current working directory in a new tab with a vertical pane running your default profile.

wt -p "PowerShell" -d . ; split-pane -V

Full documentation about wt command line arguments can be found on our docs site.

Panes

Windows Terminal has pane support for profiles. You can open a new pane of a profile by either holding Alt and clicking on the profile in the dropdown, or by using the following keyboard shortcuts:

  • Automatic pane split of current profile: Alt+Shift+D
  • Horizontal pane split of default profile: Alt+Shift+Minus
  • Vertical pane split of default profile: Alt+Shift+Plus

You can also move focus around your panes by holding Alt and using the arrow keys. Lastly, you can resize your panes by holding Alt+Shift and using the arrow keys. You can find more information about panes on our docs site.
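If the defaults don’t suit you, pane splits can also be bound to keys of your own through the "actions" array covered later in this post. A minimal sketch, with arbitrary example keys:

{ "command": { "action": "splitPane", "split": "horizontal" }, "keys": "ctrl+shift+h" },
{ "command": { "action": "splitPane", "split": "vertical" }, "keys": "ctrl+shift+v" }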

[Image: terminal panes]

Copy and paste

In Windows Terminal, the default keyboard shortcuts for copy and paste are Ctrl+C and Ctrl+V, respectively. If you do not have a selection, Ctrl+C will still act as the break command, as normal.

You can customize which keys you’d like to copy and paste with by editing the "copy" and "paste" key bindings. If you delete these key bindings from your settings.json, the terminal will default to Ctrl+Shift+C and Ctrl+Shift+V. This can be especially helpful for WSL users who need Ctrl+C and Ctrl+V free to use for their shells.

You can also choose which formatting is copied to your clipboard along with newline characters with the "copyFormatting" and "singleLine" actions associated with the copy command. Full documentation on clipboard integration commands can be found on our docs site.
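For reference, a sketch of what custom copy and paste bindings can look like in your key bindings (the "actions" array covered in the next section); the keys and the singleLine/copyFormatting values here are just examples to adapt:

{ "command": { "action": "copy", "singleLine": false, "copyFormatting": false }, "keys": "ctrl+shift+c" },
{ "command": "paste", "keys": "ctrl+shift+v" }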

Defining key bindings and actions

A large portion of the customizable properties inside Windows Terminal come from key bindings and actions. Commands inside the "actions" array will automatically be added to your command palette. If you’d like to also use them with key bindings, you can add "keys" to them to invoke them with the keyboard. A full list of every possible command can be found on the Actions page of our docs site.

Send input commands

Windows Terminal gives you the ability to send input to your shell with a key binding. This can be done with the following structure inside your "actions" array.

{ "command": {"action": "sendInput", "input": ""}, "keys": "" }

Sending input to the shell with a keyboard shortcut can be useful for commands you run often. One example would be clearing your screen:

{ "command": {"action": "sendInput", "input": "clear\r"}, "keys": "alt+k" }

Navigating to the parent directory with a key binding may also be helpful.

{ "command": {"action": "sendInput", "input": "cd ..\r"}, "keys": "ctrl+up" }

You can also use this functionality to run builds or test scripts.
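As a hypothetical example, a binding that kicks off a test run in whatever directory you happen to be in (the command and keys are placeholders for your own project’s tooling):

{ "command": {"action": "sendInput", "input": "npm test\r"}, "keys": "ctrl+shift+t" }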

WSL starting directory

As of now, Windows Terminal defaults to setting WSL profiles’ starting directory as the Windows user profile folder. To set your WSL profile to start in the ~ folder, you can add the following line to your profile settings, replacing DISTRONAME and USERNAME with the appropriate fields.

"startingDirectory": "//wsl$/DISTRONAME/home/USERNAME"

Cheers

We hope you learned a few tips and tricks about Windows Terminal throughout this blog post. We are currently planning another release for the month of November, so keep your eyes peeled for that one! If you have any questions or feedback, feel free to reach out to Kayla (@cinnamon_msft) on Twitter. For more information about Windows Terminal, you can check out our docs site.


The post Windows Terminal Tips and Tricks appeared first on Windows Command Line.

17 Oct 05:26

Good Intentions, Bad Inventions: The Four Myths of Healthy Tech

Amanda Lenhart, Kellie Owens, Data & Society, Oct 16, 2020

Rather than restate the four myths, which doesn't really tell us anything, I'll restate the four counterpoints (quoted):

  1. The concept of addiction does not encompass the full range of pleasures, risks, and uses that people create with technology;
  2. Some technology cannot be fixed by more design, and some technology should not be built at all;
  3. Many of the most important parts of digital well-being cannot be captured by quantitative metrics; and
  4. Health and well-being cannot be reduced to the single variable of screen time.

I think a good case could be made for all four of these points.

17 Oct 05:26

Trickster

This is a recommendation for 2½ books and a just-launched TV series, and for the books’ author, Eden Robinson. As a consequence of watching the TV pilot I’m now re-reading the books, which is strong testimony. While this is pretty Canadian stuff, I think the story of a disadvantaged and hard-pressed young aboriginal person, lost in strange spaces, would resonate in plenty of other landscapes. Anyhow, it’s dark and entertaining, with sex and drugs and rock ’n’ roll and supernatural creatures you would not want to meet on a dark night. These are page-turners, keep-you-up-too-late stuff.

[Book covers: Son of a Trickster and Trickster Drift]

The books are Son of a Trickster (2017), Trickster Drift (2018), and the “½” is because the third book, The Trickster Returns, isn’t out till next spring. But boy will I ever snap it up.

The TV series is on CBC (Canadian public TV); here’s a pointer which I’m not 100% sure will work outside Canada. If it does, the shows are free but there are ads. I thought S1E1 captured the spirit of the books perfectly and I’ll be spudding out. Sorry, it’s real actual TV so you’d have to wait a few months to binge it.

Our hero is Jared, a Native high-schooler whose family is deeply, spectacularly dysfunctional.

Drugs, violence, you name it, all the usual boxes are checked. The locus of chaos at Jared’s place is his mother, who’s all about trashy men and bad drugs, with a casual attitude toward rent, utility bills, and education. Granted the difficulties, she’s sort of lovable, and definitely to be feared, not pitied. Jared’s Dad (not situated locally) is a wastrel of a different flavor.

The thing about Jared is he takes care of people, whether or not they deserve it; it seems he can’t help it, it’s just who he is.

Which means Jared has a high-stress life. It helps that he’s bright and well-organized and actually makes enough money to keep the family afloat. (Not legally, but still.) He’s got his own substance-abuse issues and suffers from typically-toxic high-school culture.

Now mix in the Trickster of the title, not a natural creature at all, except sometimes a raven; there are strong roots here in our Pacific-Northwest Native story culture. Trickster has a name: Wee’git.

The supernatural bulges through the surface of reality in horrifyingly believable ways; its inhabitants are not cuddly and not friendly and are apt to bite off pieces of human anatomy, given a chance. The first book is set in a mid-coastal town and the second mostly in Vancouver, my hometown, which I disclose because it may have contributed to my being completely taken with these stories.

It seems very unlikely that the TV series will achieve the full dark craziness of the books, but early indications are positive. The actors playing Jared and his mother are razor-sharp.

Hmm, I seem to have got way down here without talking about the underlying social issues. Canada has systemic racism just like everywhere else. In particular there’s no getting away from the fact that Canada is substantially built on stolen land. The abuse of our indigenous people has been explicit and multigenerational and brutal. So, unsurprisingly, a lot of them are really hurting. These stories inhabit that reality and show you things you can’t unsee, with awesome clarity but without ever giving you a feeling that you’re being lectured. A lot more Canadians would benefit from learning the truths on display on these pages.

Oh, and a word about the author. She has a couple of other books which I plan to read, and when you hear her on the radio or YouTube or whatever, she’s a scream, with a big laugh and an endless line of good-humored stories about Native life and writer’s issues and, well, everything. I think she’d be a delight to hang out with.

[Disclosure: The buy-this-book links above are Amazon Associate tagged and if you follow them I might make a buck.]

17 Oct 05:22

Downsizing Community Teams

by Richard Millington

It’s painful to lose members of your community team.

But don’t compound the pain by trying to do the same things with fewer people.

You will burn yourself out and deliver a worse experience.

If the community team is cut, you need to cut some of the activities you’re doing too.

That might be content, running the MVP program, hosting events and activities, or something (please don’t cut moderation). Engage members in the process of deciding what to cut if you like.

Remember, it’s always better to do fewer things extremely well than many things badly.

17 Oct 05:22

The future of community: a future for communities

For the last year or so, but especially since this summer, I’ve found myself almost consumed by thinking about communities – how the best ones are created, and how they can scale. But I also started to realize the possibility space for interesting communities to have an impact on the world is much larger than I expected. Conversely, I think most of the money and effort being directed toward improving the status quo of community building is misdirected by a myopic view of where that impact will be, and what problems we need to solve together for that impact to be fully realized.

The possibilities

My conviction in the importance of intentional community building rests on two assumptions:

  1. It’s becoming radically easier to create and scale new communities
  2. People are increasingly turning to intentionally nurtured communities for personal and professional growth

There are only two factors that limit the potential for interesting communities to arise: the availability of people searching for new communities to join, and the ability for those people to gather and do interesting things together. Both of these things are on the rise.

First, there are more ways than ever for people in search of belonging to do things together, and the trend is accelerating. We are seeing new communication services designed for people to interact with each other, rather than just the void of social media. From new services like Rume, Gather.Town, Icebreaker and Among Us (which I’d put in this bucket) to old guards like Discord and Zoom who are continuing to innovate, there’s a renewed focus on helping people connect with, not just update, each other. Atop these tools, there are also lots of initiatives, companies, and accelerators trying to help more communities bootstrap. Commsor and People & Company have emerged on my personal radar to help new community builders navigate the cornucopia of tools and platforms, and I’m sure there are more. It’s becoming radically easier to create and scale new communities.

Second, people are choosing to spend more of their time in semi-private, intentionally grown places and groups than in the ruthlessly public Web or in complete isolation. As I’ve noted before, as the public Web and social media become too crowded, the entertainers and creators who can grow an audience are building their careers in public, but the rest of us are collecting in smaller, more private spaces like group chats and personal servers. We’re starting to see that the potential of the Internet to connect everyone, everywhere, is a double-edged sword, and we reap the most benefit for our personal lives when we wield it intentionally to nurture smaller communities rather than opine into the void. People, especially young people who notice this trend most acutely, are increasingly turning to smaller, intentionally brewed communities and away from the public social media. Community groups like this are going to make big strides into how we all grow personally and professionally and find belonging in the next decade.

Community is a burgeoning space, with lots of energy and enthusiasm and a growing need. But I don’t think we’re ready yet. The matches are primed for ignition, but in search of a spark.

Before the revolution

What do we still need in order to spark the community space to really see the profound impacts it can have on the world? Here, again, I think we’re still missing two critical components.

  1. The tribal knowledge on how to bootstrap and scale powerful lasting communities is not evenly distributed – there are few experts, and everyone else is left to make the same mistakes over and over.
  2. The tools at the disposal of most community builders are crude, disconnected, and sometimes even disincentivized against community growth.

The space of community builders and founders feels strikingly reminiscent of the broader startup ecosystem before widespread startup accelerators and incubators like Y Combinator. At the time, the “best practices” for how to found a startup weren’t evenly distributed to all to-be-founders. Ideas like product-market fit, minimum viable products, and how to build a pitch deck weren’t readily accessible to a lot of builders who would have been great founders, so founders spent a lot of redundant energy making the same mistakes and running into the same failures on their own. This bottlenecked the number of great companies being founded.

In the early 2010s, an increase in capital going into early-stage venture, combined with the prevalence of early-stage funding and success stories, meant there was much more support for early-stage founders. More importantly, best practices and knowledge about common mistakes to avoid as a founder became accessible. There was a startup canon – every founder eventually hears “do things that don’t scale.”

It’s easy to underestimate the profound impact that this democratization of know-how had on the vitality and success of the startup ecosystem. But it made a difference. Accelerators, founders, and investors sharing success stories and failure lessons all helped change this.

“Community” today is in the “before” half of this inflection point. There are master community builders who know how to use the tools we have to build great, lasting communities to scale, and many other community leaders are making the right mistakes and learning through them. But there’s no “community gospel,” no canon for community founders. The best practices for community founders are not distributed evenly.

The critical thing about the inflection point in the startup world is that knowledge spread became superlinear. When we depend on former founders and builders to spread know-how in the course of their work, they can only bring it to new companies and initiatives when they begin something new, or work with new teams. But when we take those experts and build out accelerators and blogs and videos and freely shared materials for the rest of the industry, that knowledge can be spread and shared much more widely, much more cheaply.

In the community world, especially this early, most experts are busy building their own communities and scaling them out. But I think there’s an open space for someone to bring the expertise and war stories of those experienced community builders and share them out for the rest of the world founding their own communities today. We’re seeing the beginnings of this trend, but as long as there are community builders re-discovering the same lessons on their own, there’s much more work to be done.

As we begin the work of distributing this knowledge and experience more evenly, I suspect we’ll run into the next bottleneck in community growth: there’s a startling lack of imagination and focus in the tools community founders can use to build their microcosms. Let’s try to understand what we’ve been missing.

Going beyond platforms

When I tell folks I’m interested in building better tools for community builders, almost without exception, I get the same follow-up question: “What’s wrong with Slack?” “Do you really want to compete with Discord?” The ones who don’t are usually the ones who have tried to scale up communities, and run into the same problems over and over again.

The work of building a great community goes far beyond simply moderating a Slack group. A great community needs regular events and meetups. It needs ways for leaders to step up and invent their own traditions, create their own events, and run their own sub-groups. Leading a community requires making it easy for members to become advocates. It means helping members connect with other members. The work of a community builder mostly happens outside of a single communication platform, and often between platforms. And in this space, the tools at the disposal of most community builders are crude, disconnected, and sometimes even disincentivized against community growth. The best community-building tools should know about best practices for how to grow great communities, and then provide one-click access to adopt those best practices into any community. They should let communities mix their own personality into the tools they use, without imposing how the community should be structured.

As a trivial example, imagine you’re organizing a new writing workshop for a community of indie authors. You’ve gotten a great headliner author to come and speak, and you know exactly what kinds of members from your community are interested in the author. Here’s a non-exhaustive list of tasks a good community manager would need to do:

  • On a community Google Calendar, schedule the event with the right-sized description and a link to RSVP or attend the event.
  • In the weekly update being sent to the community’s active members, queue up a short blurb about this event.
  • Schedule out messages on Slack, Discord, or other social media platforms in ways that are catered to each platform.
  • Set up an effective Q&A system for the event, drawing on the knowledge gained from running similar events in the past, to make sure good questions can bubble to the top and attendees feel engaged.
  • Remember to take photos or recordings of the event, so members who didn’t attend can check back later, and those who did can use those artifacts to advocate for how awesome the community is to those who aren’t a part of it yet.

What makes this more complicated is that an experienced leader can hand off this list to a new community manager, but the details matter, and the details can’t simply be dictated in a list of instructions.

I think there’s a wide-open space for us to build much, much better tools for community leaders. A community leader’s most important job is to be the model community member and contributor – all other concerns should fade into the background of their work, and we are far from that today. Community tools should have the best practices and know-how of good community management baked into their design, and they should work across platforms and across the online-offline barrier, because great communities flourish across platforms and the online-offline barrier.

Community tools need to grow beyond platforms. This is obvious when you talk to community leaders and listen to how they work, but most people who are building community tools seem too focused on owning the communities that grow from their tools, not helping them flourish out in the world. I think we can do much better here to support communities and their leaders.

The possibility space of community

We used to be limited in the communities we can belong to, by the facts of geography. When I was helping bootstrap a startup for poets in early 2017, bemoaning the lack of poets in the suburbs of the American Midwest, I caught a flight to Boston with the team to find poets we could speak to.

Because of changes in communication tools and culture in the last few years, we’re not limited by geography today. Our limits are now self-imposed, in our ability to bootstrap and grow great communities. And if we can work through those self-imposed barriers, I think the future of communities is bright. The possibility space for great, life-altering, lasting communities is wide open.

I’m often struck by the audacity required of community leaders to ask people to be a part of their community. We, humans, aren’t infinitely scalable. We have a small and finite amount of space in our lives for the people who matter. By asking someone to join our communities, we’re asking them to find in that small space some margin to spare for us, and in turn, we hopefully promise them more than just “value” – hopefully, we promise them belonging, kindness, and love.

This is the work of community builders, and I think they deserve better tools, better support, and better stories from which to draw their own efforts. I’m excited to see what we build for these leaders in the next decade, and perhaps join to build something of my own, too.

17 Oct 05:21

We’re not as divided as we think we are

by Michael Mace

The United States is not as polarized and angry as you think it is. The shrill voices that dominate discussions online and in the press are actually a very small percent of the US population. Unfortunately, social media and the press accidentally amplify the voices of the angriest people, giving us a twisted view of society. It’s like we’re all trapped in a funhouse, staring at the mirrors 24/7, believing they’re real.

The way out of the funhouse is to spend more time listening to average people, the ones who aren’t trending on Twitter. If you do, you’ll find that most of your neighbors are far more reasonable and willing to compromise than you expected. We don’t all agree on everything, but there’s a lot we can do to work through our differences. To help make that happen, some of my coworkers and I are setting up a new website, called the Human Empathy Project (link). We’ll bring you video of regular people talking about the country’s problems. The stories you hear may surprise you, and give you some cause for hope.

Here are the details on what we’re doing, and why…


My employer, UserTesting, helps companies get super-fast public feedback on just about anything: websites, apps, marketing messages, etc (link). You specify what you want to test, and within a few hours you get video of your customers reacting to it. UserTesting processes more than 130,000 of these feedback videos every month.

This year we’ve experimented with using that system to get feedback from average people on hot-button national issues, starting with the pandemic and branching out to include the election and racial justice.

We learned two important things:

•  First, it turns out regular people are very willing to speak candidly in a video test. Most of them welcome the opportunity to be heard.

•  Second, the stories we heard from regular people were strikingly different from the narrative we all see on social media and in the press. The people we heard from were far less polarized, more thoughtful, and more willing to compromise than the voices that dominate the national debate. It was like we’d slipped into a parallel universe in which Americans were still willing to listen to one another and work together to help the country. (That doesn’t apply to everyone, of course, but the really polarized people are a small percentage, at most about one person in ten.)

This was a puzzle. The UserTesting platform is designed to gather feedback from average people chosen randomly, so we couldn’t understand why we were getting results so different from the things we see online. We investigated further, and it turns out there was nothing wrong with our methodology — we were hearing from the true mainstream of the country, one that is vastly under-represented online and in the press.

We also found that we’re not the first people to notice this disconnect. For example, a nonpartisan research organization called More in Common did a very extensive survey of Americans in 2019 and concluded:

“Today, millions of Americans are going about their lives with absurdly inaccurate perceptions of each other. Partisan media consistently elevates the most extreme representations of ‘them’…. This creates a false impression that outliers are somehow representative of the majority….Despite America’s profound polarization, the middle is far larger than conventional wisdom suggests, and the strident wings of progressivism and conservatism are far smaller…. Yet both sides have absorbed a caricature of the other.”

--Hidden Tribes report by More in Common, 2019 (link).


How did we get so out of touch with each other?

There’s a toxic feedback loop between social media and the news media. It works like this:

Behavior in online forums is driven by something called the 1% rule: The vast majority of the content posted to a forum is created by about 1% of its users (link). That’s just a basic fact of human behavior, and everyone who runs an online forum knows about it.

What many people don’t understand is that those 1% are not a representative sample of the rest of the forum’s membership. Something unusual has made them far more motivated than the average visitor. It may be that they’re more interested in the subject, or they may be trying to become influencers, or they may be working through other issues that make them want to talk. Whatever the cause, they’re not average.

In most online forums, this self-selection bias doesn’t have a big negative effect. It means the reviews on a travel or restaurant review site will be written by people who are especially enthusiastic about traveling or eating out, but usually that’s OK. However, in a mass social network like Twitter or Facebook, it has a profound effect on discussions about social issues and politics. The people who post heavily on those subjects tend to be extremists, the angriest and most polarized members of society.

For example, just 2% of the US population writes 97% of the Twitter posts on national politics, according to the nonpartisan Pew Research Center (link). Those 2% “political tweeters” are not representative of the country as a whole. Pew found they’re more extreme in their politics and are more likely to be hostile toward members of the opposing party.

This is not a conspiracy theory. I’m not saying 2% of the population got together and decided to hijack the country. It’s just the way people behave online. Similar biases happen on every social media platform. Add them together, and the voices we hear on social media are systematically far more polarized, shrill, and confrontational than the population as a whole.

But I don’t think social media alone could create the divisions we see in society. Huge chunks of the population don’t pay attention to social media. About 78% of Americans don’t have Twitter accounts, and only 4% of Americans say social media is their main source of political news (link).

The larger problem is that the mainstream press, which far more people do rely on for news, too often treats social media as an opinion poll and source of stories. The things that reporters see online often shape news stories and drive the selection of what gets covered. We’re bombarded with reports about issues that are trending on social media, or quotes from someone who posted something offensive online. Let me give you a couple of examples of how pervasive this problem is…

Here’s a screen capture of the top story on the Fox News website on the day after Supreme Court Justice Ruth Bader Ginsburg died:

If you read the story, it tells you that several people online threatened to protest violently, and a Canadian lawyer and a Fox news host both said that’s a bad thing. I agree, it is a bad thing, but how did it become the top national news story? Given who writes most posts, the real news would have been if no one said anything offensive online (link).

Lest you think I’m picking on the right wing, check out the New York Times’s coverage of the Pence-Harris vice presidential debate. On the home page, above the fold, there was an article describing online comments about the debate. Before quoting from a bunch of Twitter posts, the reporter described the Internet as “that insular, blue-check-verified version of the proverbial diner in a steel town” (link).

The trouble with that sentence is that the Internet actually is nothing like a diner in a steel town. If you went to a diner in a steel town you’d get a random sample of working class people in middle America. When you go to Twitter you get a self-promoted sample of angry people and celebrities, all looking to get noticed by saying something snarky. It’s not a survey, it’s performance art. 

I’m not saying the press is evil. Most reporters are trying to be professional and are under huge stress from the changing economics of the news. But social media is seductively convenient to a reporter. You feel like you’re in touch with the mainstream of society when actually you’re soaking in a hot tub with a bunch of fanatics. I think many reporters are spending way too much time in the tub.

When we make the things that people say online into news, we’re missing what’s really happening in the country. We amplify the voices of the extremes, and play them back as if they represented the center of society.

Our constant diet of anger and division is breaking down the dialog that makes a society livable. More in Common found that about 77% of Americans are in an “exhausted majority” caught between the extremes. They are willing to work together and compromise to solve the country’s problems, but feel intimidated and shouted down by people with extreme views. More in Common wrote:

“We don’t seem to disagree anymore without perceiving another person’s views as stupid, wrong or even evil. We’re being played off each other; and told to see each other as threats and enemies, not Americans just like us but with separate experiences and views. The loudest and most extreme voices get heard, and others just feel like tuning out altogether.”


We need to change the dialog

If the problem is built into social media and the press, how do we solve it? We can start by getting more exposure to regular people. If we heard from each other more often, the extremists couldn’t make their caricatures stick.

My coworkers and I are launching a project to help regular Americans listen to one another. Called the Human Empathy Project, it’s a website where we share the voices of regular people discussing their views and problems (link). Through videos in which they get to do the talking, you’ll understand where we actually agree and disagree, and the reasons why. Sometimes we’ll dig into the reality behind an issue that’s generating controversy. Sometimes we’ll just feature people talking about their lives and challenges, so you can get a better understanding of other Americans.

We’re launching the site with three sets of videos:

•  The views and concerns of undecided voters in the 2020 Presidential election (they’re far more nuanced and informed than the stereotype) (link)

•  Messages that regular Americans want to send to the country (we asked a balanced mix of Republicans, Democrats, and independents, and I bet you’ll have trouble telling the difference between them) (link)

•  A look at the hardships being caused by the pandemic (they vary tremendously from person to person) (link)

In the weeks to come we’ll post additional videos on other important issues. I think you’ll find that your real neighbors are far more reasonable and thoughtful than you realized. We sometimes disagree deeply, but there are usually understandable reasons for those disagreements, the sort of things you could work through over a cup of coffee. And I promise we won’t take sides against anyone — except maybe the people who tell you that the other side is evil and that you have to destroy them.

Please join us, listen to each other, and get to know the real America.

Note: Although we used the UserTesting system to collect these videos, the opinions expressed here are my own and do not represent the company.


Copyright 2013 Michael Mace.
17 Oct 05:21

Some Notes on Docker Up and Running (Day 1)

by Reverend

Earlier this week I participated in a 3-hour class offered through O’Reilly’s Live Online Training platform to push myself to get more familiar with Docker. The course was called “Docker: Up and Running” and was taught by Sean Kane, who happens to be an architect at New Relic, an application Reclaim Hosting has become huge fans of as of late. The push to get more familiar with Docker has been precipitated by the launch of Reclaim Cloud this summer for sure, but also by the fact that I finally have a bit of free time to dig in more deeply (thanks Lauren!). I have been nibbling around the edges of Docker, just figuring out enough to get by (or asking Tim), but I was hoping this course would provide me with a more fundamental understanding, and I was not disappointed.

I will not go over the details of day 1 at length, but I do want to highlight a couple of things that helped me to conceptually think through Docker and containers more generally. First off, when describing the value of containers Kane noted that for him, more than security, isolation, etc. (which were all factors), the real power of containers generally, and Docker specifically, was repeatability: the idea that so many people across multiple environments, can get software up and running with one command. This helped me tremendously. I know it, but I have not been able to articulate it like this, and I appreciated him doing so. This is the power behind Reclaim Cloud, a platform that provides folks the ability to launch and host an application that before would be a one-off, bespoke environment that was virtually impossible to replicate or migrate easily. Whereas applications using containers are not only eminently portable, but endlessly repeatable. Which is why it has been fairly easy for us to spin up a whole bunch of one-click applications in the Reclaim Cloud marketplace based on existing Docker containers. This idea allowed another session I sat in on a couple of weeks back, Scaling a Data Science MOOC with Digital Ocean, to make sense beyond the specifics of the Data Incubator example. The idea of using Kubernetes to orchestrate an environment with 20K+ Jupyterhub containers is premised entirely upon this core idea of repeatability at the heart of this new era of infrastructure. How do you quickly spin up 20K+ identical applications specifically for one course? Using containers.

The other bit that might seem obvious to some, but was a terminology issue for me, was distinguishing between images and containers. The image is the core application that can be repeated innumerable times; in other words, the image is what you pull on your server and then run as a container. The difference lies in that distinction between the static image and the running container. And the way in which you optimize an image has everything to do with how quickly you can get x-amount of containers up and running, which becomes a crucial question of efficiency when you are scaling a course to 20K+ Jupyterhub containers. Optimizing a manifest file for an environment will be the difference between several seconds and several minutes of creation time. With this, the Docker infrastructure started to make some more conceptual sense to me, and that was worth the time because I can often find myself getting lost in the command line details of an issue rather than seeing the whole of the environment.
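A minimal illustration of that distinction, using the stock nginx image purely as an example: the image is pulled once, and any number of containers can then be run from it.

docker image pull nginx:latest                     # the static, reusable image
docker container run -d --name web1 nginx:latest   # one running container from that image
docker container run -d --name web2 nginx:latest   # a second container from the same image
docker image ls                                    # still just one image...
docker container ls                                # ...but two running containers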

Additionally, one of the highlights from day 1 was pushing an image of the Docker Balance game to my own Docker Hub page, and then using the following command in a Docker Engine instance on Reclaim Cloud to get it running online. That felt pretty awesome, and if you are so inclined you can play the Docker Balance game:

docker container run -d --rm --name balance_game --publish mode=ingress,target=80,published=80 jimgroom/balance_game:latest
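For anyone curious what that push looks like, it is only a handful of commands. A rough sketch, assuming a local Dockerfile for the game and the jimgroom Docker Hub account from the command above:

docker login                                    # authenticate against Docker Hub
docker build -t jimgroom/balance_game:latest .  # build and tag the image locally
docker push jimgroom/balance_game:latest        # publish it to Docker Hub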

I’ll write about Day 2 once I work through that video, since I was not able to watch it live given a couple of meetings and other demands. But having sat in on day 1 live, I am certain watching the Day 2 archive will not impact the experience at all, which is an interesting realization pedagogically—at least for the delivery of this course.

17 Oct 05:13

CoVid-19: Let’s Admit We Still Don’t Know

by Dave Pollard

This is the 13th in a series of articles on CoVid-19. I am not a medical expert, but have worked with epidemiologists and have some expertise in research, data analysis and statistics. I am producing these articles in the belief that reasonably researched writing on this topic can’t help but be an improvement over the firehose of misinformation that represents far too much of what is being presented on this topic in social (and some other) media.

Throughout the current pandemic, there have been ‘experts’ of all kinds trying to build a reputation (political, professional, social) by badmouthing what the world’s public health officers have been saying, doing, and urging the public to do, in the face of CoVid-19.

Some of these have been conspiracy theorists and their repeater-drones, who are not worth wasting further effort engaging. But others are seemingly well-meaning, intelligent, mentally balanced people. There is a propensity I think in most of us to challenge popular wisdom: “Everything you thought you knew about X is wrong.” In many cases, when that skepticism is based on reason, research and reasonable doubt, such challenges can be very useful, even essential. But in other cases (as with the climate collapse skeptics) they can cause harmful confusion and delay, and destroy the morale of people doing important work, especially in a crisis.

I’ve been critical of the anti-mask professionals (“There’s no definitive proof they do any good”), and the anti-social distancing and anti-shutdown advocates. Our society has suffered immensely from our seemingly inherent incapacity to exercise the precautionary principle, even in times of crisis. The majority’s collective global response to CoVid-19 to date has been a rare positive signal that we can exercise this principle, and this discipline, when we have a mind to do it. If we can, then there’s hope for resilience in the crises to come.

What the “until there’s unequivocal proof I won’t accept it” argument leads to is the antithesis of resilience. It leads to resistance to change, inability to adapt, and refusal to give others, and perilous situations, the benefit of the doubt. Whether it’s spoken by doctrinaire conservatives or doctrinaire libertarians, in sufficient numbers it can paralyze our capacity to deal effectively with complex situations of all kinds. It leads to dithering, backsliding, and denial that there’s a problem — the kind of thinking and emotional responses that have been so much on display among the political leaders of the US, UK and Brasil in particular in the face of CoVid-19 and all the other crises that have arisen recently and will become more and more commonplace in the years ahead.

Public health leaders have been saying for decades that we need to be prepared to take drastic, immediate, global action to deal with the threat of pandemics in our ever more crowded and ever more mobile world. Except in a small handful of countries that have breezed through the current crisis, they have been largely ignored.

What the latest group of nay-sayers are now proclaiming is that the mortality of CoVid-19 is evolving to be much lower than was thought or feared, and that as it moves more aggressively through younger populations around the world the death toll will prove to be much lower than the 0.68% IFR (translating to 50 million global deaths) that scientists and public health experts have been computing it to be and, in the absence of better estimates, assuming it to be. It is possible they are correct. It could be as much as five times lower (0.13%, translating to only 10 million global deaths at most).
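Those headline figures are simple arithmetic, assuming (pessimistically) that essentially the whole global population of roughly 7.8 billion people is eventually infected:

0.68% IFR × 7.8 billion ≈ 53 million deaths
0.13% IFR × 7.8 billion ≈ 10 million deaths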

In a typical year a half million people globally die of influenza viruses (50,000 in the US); most of those who die are old and/or suffer from complicating medical conditions, and for that reason the ’cause’ of death can be complicated, and hence the numbers are very imprecise. So if the upper limit of global CoVid-19 deaths is 5 million a year over a couple of years, that’s only 10 times the normal influenza rate, and it’s only half the number of people who die each year from cancer. Still a pandemic, and awful, but not the historical catastrophe many feared.

What this group then goes on to suggest, however, is that public health experts and politicians knew this to be the case, and have used the “pandemic” as an excuse to grab executive power and throttle civil liberties. This implication is IMO dishonest, dangerous, and an insult to our public health community and the health care professionals dealing with the disease.

If it turns out that CoVid-19 is only, at worst, ten times as bad as a normal flu season, the obvious questions are (a) how could we have been so wrong, and (b) if we’d known, what would we have done differently?

For a start, we really have little way to know what the actual infection and death toll numbers are. Data collection methods in most places are crude to non-existent, decisions on what ’caused’ a particular death (especially with other preconditions) can be all over the map, and there was (and still largely is) no accurate method to detect who is infected and who is infectious, though we’re getting better at it. So estimating the IFR is a wildly imprecise and ever-changing exercise. Due to lack of testing there were early estimates that IFR could be as high as 3-4%, though opinions varied by as much as an order of magnitude. We are still wildly behind in processing what little credible death data we have, and will never know how many CoVid-19 deaths we missed, though several studies suggest that we’ve missed at least a third of them in our reports in most affluent nations, and 90% of them in some struggling nations, based on excess deaths data. Even with the benefit of hindsight, estimated deaths from the 1918 H1N1 pandemic still vary by an order of magnitude, from 10 million to 100 million — the 50 million most often cited is really, still, just a crude guess.

The data from the SARS-1 and MERS coronaviruses, the two most recent “novel” coronaviruses (novel in that they’re not “common” like the many common cold coronaviruses), suggest that they had terrifyingly high IFRs of 10-50%, but fortunately were not very easily transmitted, so keeping the number of infections under tight control (isolation) allowed the death toll to be reduced to a tiny proportion of what might have occurred if it had had the transmissibility of CoVid-19.

So when China announced CoVid-19, there was understandable terror that this might be “the big one” that epidemiologists worldwide had feared. The combination of high virulence (IFR) and high transmissibility is a nightmare, with the potential to lead to a repeat of 1918 or the global plagues of much earlier times, or worse.

It should therefore be no surprise that the world’s public health leaders, with the best available information, strenuously recommended lockdowns, social distancing, and (in some cases) masks to mitigate this risk. The explosion of deaths, overloaded hospitals and huge positivity rates in March and April did nothing to ease these fears. Some surveys in several countries suggested that the IFR looked to be about 1%, while others suggested it was between 0.2% and 2%. Public health advice was based on the 1% assumption, which seemed consistent over several months — the best guess at the time.

More recently we’ve discovered that many people contract the disease asymptomatically, and including them in the case count lowered the best guess of the IFR to about 0.68%, which is the estimate still being used by health authorities as more data comes in. As noted above, a 0.68% IFR in a highly transmissible disease (ie one where everyone will be exposed until herd immunity is reached, at the point where around 70% of the population has been infected and is hence immune) translates to a potential death toll of 0.68% x 7.8 billion = 50 million, less whatever proportion is never infected thanks to herd immunity, and barring an effective vaccine. That’s less than the potential 80 million deaths that a 1% IFR could have produced, but it’s still in the 1918 range in total numbers killed, and two orders of magnitude more than “normal” annual influenza deaths. Worth a lockdown? Most people thought so.

By the end of September, the global reported death toll was more than a million, with estimates of global actual deaths averaging closer to twice that number. Reported deaths for the past four months have been very consistent at about 180,000/month, and reported cases and positivity rates in several early-hard-hit areas have been spiking upwards again. More troubling, even though the reported newly infected were much younger and expected to die in much smaller numbers when infected, CoVid-19 hospitalization numbers in the US and some other areas jumped back up to record levels in the late summer. But reported deaths did not rise nearly as much, causing some to question whether more people had actually been infected without incident than earlier thought (although models already suggested that as few as 10% of infections had actually been reported). Could the underreporting of cases be even more extreme than thought, and could the IFR be much lower than what the data so far had been pointing to?

That’s where we are now. The biggest challenge is that our tools for testing who has been infected are still limited, and not terribly reliable. Antibody tests until recently had suggested that even in the hardest-hit areas, only 20% of the population had been infected (far below herd immunity levels), and that worldwide on average only about 2% had been infected — still supporting the 0.68% IFR (2% x 7.8B x 0.68% = 1 million global deaths).

It was strange, therefore, to hear the head of the WHO suggest recently that as much as 10% of the world’s population, five times as many, had been infected. This suggests they believe that the actual IFR could be five times lower, about 0.13%, and that we’re looking at a maximum of 10 million CoVid-19 deaths, not 50 million. Indeed, if current reported deaths continue at a 180,000/month rate, ie 2 million a year, it would be a surprise to hit even 10 million deaths before a vaccine is perfected, even assuming a third to a half of CoVid-19 deaths go unreported. And some studies in India, which has many CoVid-19 deaths but surprisingly few deaths per million people, suggest that as many as half of the people in parts of that country have been infected.

Many are not convinced. The IHME continues to use the 0.68% IFR and projects that by the end of January global reported deaths will hit 2.5 million, and the daily death rate will be 16,000, three times the current rate, which, if it continues unabated, would lead to 6 million more deaths next year until and unless an effective vaccine is deployed. Without at least some social distancing and mask use restricting the spread, they say, the daily death rate will soar to 50,000 by January, an annual rate of 18 million more reported deaths. On this basis, it is easy to understand why public health experts feared that as many as 50 million might die without restrictions in place, if an effective vaccine was still years away.

But suppose they’re wrong, and the analyses that supported the 0.68% IFR were flawed, which is possible for a host of perfectly understandable, benign reasons. Suppose the antigen and other tests done to date have missed most infections, and that as the WHO is now proposing, 10% of the world’s population has already been infected and the actual IFR is hence only 0.13%. Suppose too that an effective vaccine will have been universally administered by December 2021. And finally, let’s suppose that by the end of this year, with death rates flat and hospitalizations manageable, pandemic fatigue has set in and we allow the disease to more or less run its course, continuing to ease restrictions in most of the world.

If herd immunity cuts in at 70%, that would mean the final death toll is likely to be 0.13% x 7.8B x 70% = 7 million deaths. That corresponds to about 300,000 deaths in the US and 35,000 in Canada. If the 0.13% is correct, it also means nearly half the US population has already been infected, putting the US about 2/3 of the way to herd immunity, and that about 20% of Canadians have been infected, putting Canada less than 1/3 of the way there. Another 25,000 deaths may well be unacceptable to Canadians (only 10,000 have died here so far, mostly in Québec), so while the US may shrug and return to business as normal, Canadians may well keep restrictions in place.
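If you want to check the arithmetic, here is a rough Python sketch; the population figures and the roughly 220,000 US reported deaths used below are approximations assumed for illustration, not numbers taken from this post.

# Back-of-the-envelope check of the scenario arithmetic above.
# Population figures and the 220,000 US reported deaths are approximations.
WORLD_POP = 7.8e9
US_POP = 330e6
CANADA_POP = 38e6
HERD_IMMUNITY = 0.70  # assumed share of the population infected before spread stops

def deaths(population, ifr, infected_share=HERD_IMMUNITY):
    """Potential deaths if infected_share of the population catches the virus."""
    return population * infected_share * ifr

print(round(deaths(WORLD_POP, 0.0013) / 1e6, 1))  # ~7.1 million globally
print(round(deaths(US_POP, 0.0013)))              # ~300,000 in the US
print(round(deaths(CANADA_POP, 0.0013)))          # ~35,000 in Canada

# Share of the US already infected implied by a 0.13% IFR and ~220,000 reported deaths
print(220_000 / 0.0013 / US_POP)  # ~0.51, ie roughly half the population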

But if we do reach a consensus that this disease spread five times faster and farther than we thought, and is hence much less lethal than we thought, what does that mean for our assessment of public health’s performance to date, and what to do next?

For the conservatives and libertarians who think the whole exercise was overreach, the knee-jerk reaction is that the economy was shuttered unnecessarily, and that putting public health in charge of decision-making, instead of the usual corporatist clowns, was a mistake. I think most public health leaders and organizations (except in the US, where absurd underfunding and political interference have seriously undermined the credibility of most of its public health institutions) have performed masterfully throughout this crisis. If you look at the chart at the top of this post, you can see that they were led (by the best data available at the time) to believe that the choice was between scenarios (1) and (2) shown in blue — ie between a “herd immunity” strategy (1), with 20 or 50 million likely to die, or a severe restrictions strategy (2), with a worst-case outcome of around 8 million deaths. Their choice was obvious, and responsible.

But suppose we now find out that the IFR is actually 0.13%, one fifth of what everyone thought it was, and that nearly half of Americans (except in isolated places like North Dakota, which this month has the highest daily death rate per million in the world) have already been infected? In that case we set scenarios (1) and (2) aside and look instead at 0.13% IFR scenarios (3) — “let herd immunity stop it” — and (4) — “let’s keep the restrictions on to prevent unnecessary deaths”. That’s a 7 million versus 3.5 million deaths choice. I know what my choice would (still) be. A lower IFR would be great news, and it would change nothing insofar as what should have been done and what we should continue to be doing.

I’d like to go further and see masks made mandatory everywhere — not as a criminal offence for the non-compliant but as a global social convention, a small sacrifice to tell the world we care about each other’s health and lives more than we care about a minor physical discomfort. That could save another million lives, and if it were universally accepted it might well stop this disease cold in its tracks — new cases reduced to zero, virus extinct — in a matter of weeks or at most a few months. Would that ever be a lesson for when (not if) the “big one” (the pandemic that is both highly virulent and highly transmissible, with deaths potentially in the billions) inevitably hits us. If we’re not careful, we now may end up becoming more complacent about pandemics if this one turns out to be “not so bad”, and that could prove disastrous in preparing for and addressing the next one.

A few other points that I’ve made over and over that still bear repeating:

  1. We still don’t know how this disease kills and sickens us. We’re still too busy fighting the fires to address how much damage they’ve done — how much damage the virus has done to our lungs, brains, hearts and other virus-susceptible organs. Even asymptomatic infected people have shown signs of concerning organ damage whose long-term effect is unknown. In a recent study 88% of CoVid-19 patients hospitalized showed permanent “lung abnormalities” and 75% were still struggling with serious symptoms three months after admission.
  2. Viruses can mutate, and new ones thrive among overcrowded breeding grounds like our cities and residential institutions (like prisons), especially when the inhabitants are chronically or acutely unwell. The second wave in 1918 did far more damage than the first, despite preparedness, and it mostly killed the young and healthy. We must be much better prepared for the viruses to come. It’s not that expensive.
  3. Pandemics were rare before 1918, and back then depended on ignorance and unsanitary conditions. They are common now, for three reasons we show no willingness or capacity to address — (1) factory farming, which has been the likely breeding ground for most recent “novel” virus strains, as well as being the source of poisoned, toxic foods that cause many times more human deaths and chronic diseases than pandemics; (2) the plundering of the planet’s last wilderness areas, which house untold numbers of viruses and bacteria to which humans have zero immunity; and (3) the exploitation, for food and quack medicines, of exotic creatures (like bats and pangolins) with very different immune systems from ours, which likewise expose us to all the viruses and bacteria to which they’re immune, when we “consume” them. Until we deal with these causes of pandemics, we had better damned well be ready to sacrifice convenience and profit to deal with their consequences.
  4. Much has been made about the “trade-off” between health and economy. In case anyone hasn’t been paying attention, our industrial growth economy is killing our world, poisoning and exhausting its life forms, its soils, its air and its water, to the point it’s precipitated the sixth great extinction of life on earth. And all of it — all of it! — accrues to the benefit of the richest 1%, simply to keep their stocks, land, products, options and bonuses growing in value. A temporary halt to that madness isn’t a sacrifice; it’s a chance to take stock of whether the way we live our lives is part of the solution or part of the problem, and hopefully start to remedy it.

Our public health and health care workers are doing their best under terribly trying conditions, and using very, very imperfect information. Let’s please give them the benefit of the doubt and stop with the blaming and shaming and second-guessing of ulterior motives. Nobody knows what’s going on. We’re just trying to figure it out, with everything we’ve got.

Thanks for reading. Please vote — British Columbians Oct 24th; Americans Nov 3rd, if not before.

17 Oct 02:13

How Can Linking to an Article be Immoral When the Media Source Itself Does the Posting?

Michael Geist, Oct 16, 2020
Icon

In recent weeks there have been increasing calls for websites (and especially Google and Facebook) to pay news organizations for links to their news content. Recent laws (and actual agreements) in France and Australia have underscored this. Now we have news that "Canadian Heritage Minister Steven Guilbeault has called the practice of linking to news articles on social media sites such as Facebook 'immoral' and indicated that he plans to establish a new mandated licensing requirement." But as Michael Geist points out, not only do news publishers "actively encourage their users to post links", they actually post the links themselves. I'm not surprised; I get numerous requests myself, which I review (except for the 'guest articles', which are never going to appear on my site) and post as appropriate. I like getting suggestions, but not if I have to pay for them. I will never pay to quote and comment on a link, and would point out to the Minister that such a law if enforced broadly would destroy teaching and scholarship.

Web: [Direct Link] [This Post]
17 Oct 02:13

Building an Evernote to SQLite exporter

I've been using Evernote for over a decade, and I've long wanted to export my data from it so I can do interesting things with it.

Ideally I'd use their API for that, so I could continually export new notes via a cron. Unfortunately the API is implemented using Thrift (the only public API I've seen to use that) and my attempts to speak to it from Python have so far failed to get anywhere.

Last weekend I decided to try using their "Export notes" feature instead, and its ENEX XML format.

ENEX - the Evernote XML export format

Select a note - or multiple notes - in the Evernote macOS desktop app, use the File -> Export Notes menu item and Evernote will create a Notes.enex exported XML file.

These files can get BIG, because any images attached to your notes will be embedded as base64 encoded binary data within the XML. My export is 2.8GB!

When faced with unknown XML like this it's useful to get a high level overview of what tags are present. In 2009 I wrote a Python script for this - as part of this project I updated it for Python 3 and pushed a release to PyPI.
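The published script isn't reproduced here, but a minimal sketch of the same idea (load the whole file, then count each tag's attributes, children and parents) might look like this:

# Minimal sketch of the tag-overview idea (not the published script): parse the
# whole file, then count each tag, its attributes, its children and its parents.
import json
import sys
from collections import Counter, defaultdict
from xml.etree import ElementTree as ET

def summarize(root):
    summary = defaultdict(lambda: {
        "count": 0,
        "attr_counts": Counter(),
        "child_counts": Counter(),
        "parent_counts": Counter(),
    })

    def walk(el, parent_tag=None):
        info = summary[el.tag]
        info["count"] += 1
        info["attr_counts"].update(el.attrib.keys())
        if parent_tag is not None:
            info["parent_counts"][parent_tag] += 1
        for child in el:
            info["child_counts"][child.tag] += 1
            walk(child, el.tag)

    walk(root)
    return summary

if __name__ == "__main__":
    # Loads the whole file into memory - fine if you have the RAM for it
    root = ET.parse(sys.argv[1]).getroot()
    print(json.dumps(summarize(root), indent=4))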

Here's a subset of the output when run against my Notes.enex file (full output is here).

{
    "note": {
        "attr_counts": {},
        "child_counts": {
            "content": 2126,
            "created": 2126,
            "note-attributes": 2126,
            "resource": 2605,
            "tag": 45,
            "title": 2126,
            "updated": 2126
        },
        "count": 2126,
        "parent_counts": {
            "en-export": 2126
        }
    },
    "note-attributes": {
        "attr_counts": {},
        "child_counts": {
            "altitude": 1466,
            "application-data": 449,
            "author": 998,
            "classifications": 51,
            "content-class": 387,
            "latitude": 1480,
            "longitude": 1480,
            "reminder-done-time": 2,
            "reminder-order": 2126,
            "reminder-time": 1,
            "source": 1664,
            "source-application": 423,
            "source-url": 85,
            "subject-date": 10
        },
        "count": 2126,
        "parent_counts": {
            "note": 2126
        }
    }
}

This shows me that every note is represented as a <note> element, and crucial metadata lives in children of a <note-attributes> child element.

(I thought I'd need to tweak the script for performance since it works by loading the entire file into memory, but my laptop has 32GB of RAM so it didn't even blink.)

ENEX limitations

I ran into some significant limitations while working with my ENEX export.

  • Notebooks are not represented in the file at all - you just get the notes. So I can't tell which recipes were in my "cooking" notebook. It's possible to work around this by manually exporting the notes from each notebook one at a time and storing them in separate export files. I didn't bother.
  • Some of the data in the file - the note content itself for example - consists of further blocks of XML embedded in CDATA. This means you have to run a nested XML parser for every note you process.
  • The notes XML <en-note> format is mostly XHTML, but includes custom <en-media hash="..."> tags where inline images should be displayed.
  • Those inline images are CDATA encoded base64 strings. They don't include a hash, but I figured out that decoding the base64 string and then running it through MD5 generates the hash that is used in the corresponding <en-media> tag (see the short sketch after this list).
  • Notes in the export don't have any form of unique ID!
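Here's a minimal sketch of that hash trick (resource_base64 is assumed to be the base64 text found inside a resource's <data> element):

# Minimal sketch of matching a resource to its <en-media hash="..."> tag.
import base64
import hashlib

def resource_hash(resource_base64):
    binary = base64.b64decode(resource_base64)
    return hashlib.md5(binary).hexdigest()

# The hex digest returned here should match the hash="..." attribute on the
# corresponding <en-media> tag in the note content.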

That last limitation - the lack of unique IDs - is a huge pain. It means that any export is by necessity a one-time operation - if you edit a note and attempt a re-export you will get a brand new record that can't be automatically used to update the previous one.

This is particularly frustrating because I know that Evernote assigns a GUID to each note - they just don't include them in the export file.

Writing an exporter

My Dogsheep family of tools aims to liberate personal data from all kinds of different sources and convert it into SQLite, which means I can explore, query and visualize it using Datasette.

dogsheep/evernote-to-sqlite is my latest entry in that series.

I based it on healthkit-to-sqlite because that, too, has to deal with a multiple GB XML file. Both tools use the Python standard library's XMLPullParser, which makes XML elements available as a stream without needing to load the entire file into memory at once.
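A stripped-down version of that streaming pattern (not the actual evernote-to-sqlite code) looks something like this:

# Stripped-down streaming pattern: feed the file to XMLPullParser in chunks
# and handle each completed <note> element as it arrives.
from xml.etree.ElementTree import XMLPullParser

def stream_notes(path, chunk_size=1024 * 1024):
    parser = XMLPullParser(events=("end",))
    with open(path, "rb") as fp:
        while True:
            chunk = fp.read(chunk_size)
            if not chunk:
                break
            parser.feed(chunk)
            for _event, el in parser.read_events():
                if el.tag == "note":
                    yield el
                    el.clear()  # free the element once it has been consumed

for note in stream_notes("MyNotes.enex"):
    print(note.findtext("title"))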

The conversion code is here. It ended up being relatively straightforward, using ElementTree to extract data from the XML and sqlite-utils to write it to a database.

It implements a progress bar by tracking the number of bytes that have been read from the underlying XML file.
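One way to build that kind of byte-based progress bar (a sketch, not the tool's actual code, and it assumes click is available):

# Sketch of a byte-based progress bar: wrap the file object so every read()
# advances the bar by the number of bytes returned.
import os
import click

class ProgressFile:
    def __init__(self, fp, bar):
        self.fp = fp
        self.bar = bar

    def read(self, size=-1):
        data = self.fp.read(size)
        self.bar.update(len(data))
        return data

path = "MyNotes.enex"
with click.progressbar(length=os.path.getsize(path), label="Importing") as bar:
    with open(path, "rb") as fp:
        wrapped = ProgressFile(fp, bar)
        # ...hand `wrapped` to whatever parses the XML; here we just drain it...
        while wrapped.read(1024 * 1024):
            pass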

One optimization: originally I created a single resources table with the resource metadata and a BLOB column containing the binary image contents.

This table was huge - over a GB - and had very poor performance for operations such as querying and faceting across unindexed columns - because any table scan had to work through MBs of binary data.

My personal Evernote notebook has around 2,000 resources. 2,000 rows of metadata should usually be very fast to query.

So... I split the binary data out into a two column resources_data table - md5 primary key and a BLOB for data. This massively sped up queries against that resources table.
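A rough sqlite-utils sketch of that two-table layout (the metadata columns here are illustrative, not the tool's actual schema):

# Rough sketch of the two-table split described above, using sqlite-utils.
import sqlite_utils

db = sqlite_utils.Database("evernote.db")

# Small metadata rows: fast to scan, query and facet
db["resources"].insert(
    {"md5": "76dd28b07797cc9f3f129c4871c5293c", "mime": "image/png", "note_title": "Example"},
    pk="md5",
    replace=True,
)

# The binary contents live in their own table, keyed by the same md5
db["resources_data"].insert(
    {"md5": "76dd28b07797cc9f3f129c4871c5293c", "data": b"...binary image bytes..."},
    pk="md5",
    replace=True,
)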

And it works! Running the following command produces a SQLite database file containing all of my notes, note metadata and embedded images:

evernote-to-sqlite enex evernote.db MyNotes.enex

Serving up formatted notes

Figuring out the best way to serve up the note content in Datasette is still a work in progress - but I've hacked together a delightfully terrible way of doing this using a one-off plugin.

Datasette plugins are usually packaged and installed via PyPI, but there's an alternative option for plugins that don't warrant distribution: create a plugins/ directory, drop in one or more Python files and start Datasette pointing --plugins-dir at that directory.

This is great for hacking on terrible ideas. Here's the evernote.py plugin I wrote in full:

from datasette import hookimpl
import jinja2

START = "<en-note"
END = "</en-note>"
TEMPLATE = """
<div style="max-width: 500px; white-space: normal; overflow-wrap: break-word;">{}</div>
""".strip()

EN_MEDIA_SCRIPT = """
Array.from(document.querySelectorAll('en-media')).forEach(el => {
    let hash = el.getAttribute('hash');
    let type = el.getAttribute('type');
    let path = `/evernote/resources_data/${hash}.json?_shape=array`;
    fetch(path).then(r => r.json()).then(rows => {
        let b64 = rows[0].data.encoded;
        let data = `data:${type};base64,${b64}`;
        el.innerHTML = `<img style="max-width: 300px" src="${data}">`;
    });
});
"""

@hookimpl
def render_cell(value, table):
    if not table:
        # Don't render content from arbitrary SQL queries, could be XSS hole
        return
    if not value or not isinstance(value, str):
        return
    value = value.strip()
    if value.startswith(START) and value.endswith(END):
        trimmed = value[len(START) : -len(END)]
        trimmed = trimmed.split(">", 1)[1]
        # Replace those horrible double newlines
        trimmed = trimmed.replace("<div><br /></div>", "<br>")
        return jinja2.Markup(TEMPLATE.format(trimmed))

@hookimpl
def extra_body_script():
    return EN_MEDIA_SCRIPT

This uses two Datasette plugin hooks.

render_cell() is called every time Datasette's interface renders the value of a column. Here I'm looking for the <en-note> and </en-note> tags and, if they are present, stripping them off and marking their inner content as safe to display using jinja2.Markup() (without this they would be automatically HTML escaped).

extra_body_script() can be used to inject additional JavaScript at the bottom of the page. I'm injecting some particularly devious JavaScript which scans the page for Evernote's <en-media> tags and, if it finds any, runs a fetch() to get the base64-encoded data from Datasette and then injects that into the page as a 300px wide image element using a data: URI.

Like I said, this is a pretty terrifying hack! But it works just fine, and my notes are now visible inside my personal Datasette instance:

One of my notes displayed in Datasette, with an embedded image

Bonus feature: search by OCR

An interesting feature of Evernote is that it runs cloud-based OCR against images in your notes, enabling you to search across the text contained within them.

It turns out the output of this OCR process is included in the export XML. It looks like this:

<recoIndex
  docType="unknown"
  objType="image" objID="76dd28b07797cc9f3f129c4871c5293c" 
  engineVersion="7.0.24.1" 
  recoType="service" 
  lang="en" 
  objWidth="670" 
  objHeight="128">
  <item x="26" y="52" w="81" h="29">
    <t w="76">This</t>
  </item>
  <item x="134" y="52" w="35" h="29">
    <t w="79">is</t>
  </item>
  <item x="196" y="60" w="37" h="21">
    <t w="73">so</t>
  </item>
  <item x="300" y="60" w="57" h="21">
    <t w="71">can</t>
  </item>
  <item x="382" y="54" w="79" h="27">
    <t w="77">test</t>
  </item>
  <item x="486" y="52" w="59" h="29">
    <t w="82">the</t>
  </item>
  <item x="570" y="54" w="59" h="25">
    <t w="74">OCR</t>
    <t w="33">DeR</t>
    <t w="15">OCR!</t>
    <t w="14">OCR]</t>
  </item>
</recoIndex>

As far as I can tell an <item> gets multiple <t> elements only if the OCR wasn't 100% sure.

evernote-to-sqlite creates a searchable ocr text field using values from that XML, so I can search my images in Datasette.
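A minimal sketch of turning a recoIndex block into searchable text, keeping the highest-confidence <t> for each <item> (the real tool may handle this differently):

# Minimal sketch: extract searchable text from a recoIndex block by keeping
# the highest-confidence <t> alternative for each <item>.
from xml.etree import ElementTree as ET

def ocr_text(reco_index_xml):
    root = ET.fromstring(reco_index_xml)
    words = []
    for item in root.findall("item"):
        alternatives = item.findall("t")
        if alternatives:
            best = max(alternatives, key=lambda t: int(t.get("w", "0")))
            words.append(best.text)
    return " ".join(words)

# For the example above this returns: "This is so can test the OCR"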

17 Oct 02:13

Online learning during COVID-19: 8 ways universities can improve equity and access

Nadia Naffi, Ann-Louise Davidson, Azeneth Patino, Brian Beatty, Edem Gbetoglo, Nathalie Duponsel, Academic Matters, Oct 16, 2020
Icon

This is a useful document and a worthwhile initiative, though I would want to make some significant changes. It's based to a large degree on the OECD report on Equity and Quality in Education, which for the most part stresses addressing inequitable outcomes in schools with better quality teachers and instruction (rather than, say, addressing social and economic inequality). What caught my eye in this report was its working definition of equity in higher education. It consists of three statements all beginning "All students are..." and I would want to change the word 'students' to the word 'people'. Because equity and access begin with the people who are outside the system, not within it.

I appreciate the authors' very necessary focus on systemic racism and the impact of Covid on disadvantaged students, and I don't want to be overly critical, but I feel the recommendations as a whole could be less prescriptive and more empowering, and should include redress of economic inequality. For example, the definition of "accessible materials" should include free and open access resources, the recommendation to "use university and institutional IT department-supported digital technologies" should change 'use' to 'provide' and be refactored to provide choice and ownership, and "a flexible approach to student participation" should again enable choice (some people, like me, like three-hour sessions; that's still how I sometimes learn today).

Web: [Direct Link] [This Post]
17 Oct 02:12

Kevin's Gravel Packer Pass Hunter

by noreply@blogger.com (VeloOrange)
by Kevin


As we near the launch of the new Pass Hunter, we have received a number of questions from customers interested in racks and bags to pair with the frame. We envision the Pass Hunter as an ideal "sport touring" bike, well-suited for a spirited weekend jaunt unencumbered by heavily loaded front and rear panniers. Pushing the concept a bit further, I wanted to see how the frame could perform as a pseudo-bikepacking rig. With a weekend gravel tour as the test, I dreamed up my minimalist gravelpacking (is this a thing or did I just make that up?) setup.


I am primarily a road cyclist, and so my original Pass Hunter build was designed around Shimano's Ultegra R8000 drivetrain and a set of 700c wheels with TRP Spyre brakes. I went with the widest Ultegra cassette offered and paired it with our compact Drillium crank. I wanted a build geared for road climbing--this is a Pass Hunter, after all. And that noir Drillium crank just looks sweet, doesn't it? (The crank is on sale now, by the way).


For bars I went with our Nouveau Randonneur drops mounted to our Tall Stack stem. A 0 Setback Seatpost and Brooks C15 saddle round out the cockpit. The Cambium saddles can be a bit divisive, but I've had mine for years and tend to swap it around between my touring setups. Our noir Moderniste bottle cages complete the black and blue look, with a gold seat collar and brass stem cap (a close enough color match) providing a dab of glitz. 


When preparing the bike for gravelpacking, I swapped out the 700c wheels for a set of Shimano GRX 650b wheels. These were ready to go with a set of Teravail Rampart tires (tubeless) in 47 mm. For me, this was a major jump from the 32 mm max I am used to. I've never claimed to be down with the #supplelife, but I have to admit the supple lifers might be onto something. 



For luggage, I wanted to keep things simple and compact. I reached out to our friends at Roadrunner Bags and settled on their Jumbo Jammer handlebar bag and Fred saddle bag. This proved to be just enough space for a three-day tour, with room for my camping gear, a change of clothes, snacks, and some other miscellaneous gear. If needed, a frame bag or some fork-mounted cargo cages could easily expand carrying capacity.


After a weekend in the mountain backcountry (read my ride report here), I at times found the road groupset to be a bit outmatched. A gravel-centric 1x system might ultimately be the only change I'd make for a similar tour in the future. Otherwise, I was quite pleased. I don't see myself going back to a rack and pannier system anytime soon. Check out the complete build list under our Bike Build Ideas page.

17 Oct 02:10

Harold Craft's 1970 doctoral dissertation "Radio observations of the pulse profiles and dispersion measures of twelve pulsars", page 215. This was the inspiration for Joy Division's 1979 "Unknown Pleasures" album cover art. pic.twitter.com/GwyTOlEK4I

by Things from the past 📷🎥 (moodvintage)
mkalus shared this story from moodvintage on Twitter.

1242 likes, 299 retweets
17 Oct 02:09

I’ve signed up twitter.com/slowwaysuk/sta…

by Chris Kendall (ottocrat)
mkalus shared this story from ottocrat on Twitter.

We're creating a network of #SlowWays walking routes that connect all of Great Britain's towns and cities.

During lockdown 700 volunteers created over 7,000 routes.

Later this year we need 10,000 people to help explore and test them. Up for it? slowways.uk 💚🐌🚶🏿‍♀️ pic.twitter.com/gUz1UcaA9N






81 likes, 35 retweets



14 likes, 5 retweets
17 Oct 02:09

RT @gavinesler: How can you possibly be surprised that a medium sized country of 67 million people is finding it difficult persuading a tra…

by Gavin Esler (gavinesler)
mkalus shared this story from ottocrat on Twitter:
The truck has decided to play chicken with the flu. I am sure the truck will veer at the last second.

How can you possibly be surprised that a medium sized country of 67 million people is finding it difficult persuading a trading bloc of 400 million plus that “we hold all the cards?” It’s one of the obvious and permanent consequences of Brexit and it has been obvious for years twitter.com/davidghfrost/s…

2/3 Also surprised by suggestion that to get an agreement all future moves must come from UK. It's an unusual approach to conducting a negotiation.




2675 likes, 613 retweets

Retweeted by Chris Kendall (ottocrat) on Friday, October 16th, 2020 6:29am


5336 likes, 1146 retweets
17 Oct 02:07

Internet Images of Old Trains

by Stephen Rees

“You can now head over to a new collection at Flickr and search through an archive of 2.6 million public domain images, all extracted from books, magazines and newspapers published over a 500 year period. Eventually this archive will grow to 14.6 million images.”
source: Open Culture

So, of course, the first thing I did on reading that was search for train pictures, and hit gold with the first try.



Image from page 188 of “Electric trains” (1910)

Westinghouse Motor-Coach Complete.{Heysham Branch of the Midland Railway.}

View of Train consisting of Siemens Motor-Coach and Two Trailers.

Liverpool Overhead Railway 1884
LNWR coaches with District Railway Electric Locomotives 1884
 Ramsay Condensing Turbine Electric Locomotive 1910

Basically the point of this exercise is to remind me to go look here next time I need an image.



17 Oct 02:07

Seven Months of Sheltering in Place

by Richard

Since last time I checked in, I had a couple of days of vacation in the same week, meaning two 3-day weekends in a row. I spent one of those days on Toronto Islands again. I played several location-based games, and even ground-truthed a couple of microconfluences. The other day I kept open in case I was needed for helping with an online event the Icelandic Canadian Club held. It was a great success!

I've gotten the hang of wearing a mask, and most shops are open and tensions are a lot lower than they were up until about June. I didn't need to line up outside of a grocery store the few times I needed to go, not that I spent a lot of time waiting to get in. Winter is coming, and I hear people being worried about having to stay outside in the cold before getting provisions, but I don't anticipate that being a problem. I'm more comfortable with buying things again, though almost all online. I recently looked at my new year's intentions for 2020, and other than a big trip that I never got too deep into planning, I've more or less done everything I set out to do this year. That's been somewhat of a surprise.

In late September, I dined in a few times, once at Five Guys Burgers and Fries, another couple of times at the diner across the street from me. Up until then, I had dined exclusively at my apartment (either my own cooking or takeout), or the "balcony" outside the diner across the street. Other than those occasions, and dining at the restaurant on Toronto Islands (both times outside), I did not dine at restaurants. Dining alone is bad enough to begin with. Dining alone and having my guard up was worse.

The gods smiled on me, and while I approached the bike rides along the Lakeshore Boulevard Activeway of September 26th and the 28th as the last of the year, the City of Toronto extended them into October. There are noticeably fewer people on the roads, but Ontario announced increased restrictions on gatherings, calling it Modified Stage 2. That meant no dining indoors at restaurants, and gyms closed. I don't go to gyms, but I know it's a part of a lot of people's lives. Extending ActiveTO was the right move, and I'm looking forward to hearing what's in store for the colder months so that I can stay active. I've lost count of how many times I've ridden along the Activeway, and update my long Twitter thread about it each time.

The course about me started a couple of weeks ago, and it's going well so far; I've already made my way through 4 workbooks. I've registered for an American literature course in March, which is optimistically listed as an in-person course. I anticipate it being held online, even if a vaccine is available.

Throughout the pandemic, I've never felt days melting into others like some people have felt. It doesn't feel like an extension of March, and having a job where I work Monday to Friday, weekends felt like weekends. At work, I would grace Wednesdays with the "What a week, huh?" meme from 30 Rock, but I had to suspend the running joke because it had lost its pre-pandemic edge. I hope to return to days where I can drop the image macro in chat again.

17 Oct 02:06

Rocketman’s web tool could help TTC riders avoid crowded buses

by Jonathan Lamont
Rocketman's 'transitcrowds.com' website

As the second wave of COVID-19 sweeps through Toronto, TTC riders have taken to social media to complain about the lack of social distancing on the transit organization’s vehicles. Instead of working to rectify the problem by increasing the number of vehicles to help spread out crowds, the agency simply tweeted that social distancing was “no longer possible.”

Understandably, the TTC’s response sparked considerable ire from passengers. Many TTC riders find themselves choosing between risking their safety by getting onto overcrowded vehicles or waiting for less crowded vehicles to come along. It’s not just riders, as the TTC workers union, ATU 113 Toronto, has called on the agency to limit the number of passengers on buses and bring back laid-off workers.

Although the TTC’s response to the second wave is unacceptable and the agency needs to do more to protect the safety and health of passengers, one app is offering at least a partial solution. RBC Ventures’ Rocketman transit app chimed in on the TTC Twitter feud to let riders know about its ‘transitcrowds.com‘ website.

Rocketman’s Twitter account said the team has been working hard on the tool, which can predict how crowded routes will be ahead of time. According to the website’s ‘About‘ section, Rocketman uses two types of data to make these predictions. First, it uses historical Automatic Passenger Counter (APC) data provided by transit agencies to show when different routes are typically busiest; that information is displayed in graphs on the site. Second, Rocketman sources live, user-generated reports submitted through the Rocketman app.

The website uses this data to categorize routes as ‘Low,’ ‘Some’ or ‘High’ Risk. Routes rated Low Risk are at 0 to 12.5 percent of average capacity, Some Risk at 12.5 to 30 percent, and High Risk at 30 percent and up.
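Expressed as code, those categories would look something like this illustrative sketch (not Rocketman's actual implementation):

# Illustrative sketch of the crowding categories described above.
def crowd_risk(percent_of_average_capacity):
    if percent_of_average_capacity < 12.5:
        return "Low Risk"
    if percent_of_average_capacity < 30:
        return "Some Risk"
    return "High Risk"

print(crowd_risk(8))   # Low Risk
print(crowd_risk(20))  # Some Risk
print(crowd_risk(45))  # High Risk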

Transitcrowds.com could be a helpful tool for TTC passengers that lets them better avoid crowded vehicles, either by riding at alternate, less busy times or by finding less busy routes.

Rocketman’s web tool isn’t the best solution — the TTC needs to take action if things are to improve — but transitcrowds.com could help some people avoid the worst of crowding on the TTC.

Source: Rocketman (Twitter) Via: BlogTO

The post Rocketman’s web tool could help TTC riders avoid crowded buses appeared first on MobileSyrup.

17 Oct 02:05

Fiat Chrysler to manufacture EV or hybrid vehicle in Windsor, Ontario

by Brad Bennett

Following the deal between Ford, multiple government levels, and the Unifor autoworkers union, Fiat Chrysler will set up an EV plant in Ontario.

According to a report from Electric Autonomy Canada, the company’s Windsor, Ontario Assembly plant will be retrofitted as part of a $1.3 to $1.5 billion investment.

Over the next three years, Fiat Chrysler will retool the Windsor plant to get it ready for EV manufacturing, resulting in as many as 2,000 new jobs, according to the automaker. Currently, the plant has 4,600 employees, and Fiat Chrysler employs 9,000 people in Ontario.

The Windsor plant will manufacture the Chrysler Pacifica and Voyager minivans and step up to build a new plug-in hybrid or an EV in 2024.

Unifor is also working with Ford to bring five new electric vehicles to the company’s plant in Oakville, Ontario.

Source: Electric Autonomy Canada

The post Fiat Chrysler to manufacture EV or hybrid vehicle in Windsor, Ontario appeared first on MobileSyrup.

17 Oct 02:03

Three Simple Changes in Road Design & Speed Can Save Pedestrian Lives

by Sandy James Planner

Here’s more data showing that simple changes to the speed and design of city roads can make all the difference in reducing pedestrian and cyclist fatalities and serious injuries.

Planner Eric Doherty posted this article from the Insurance Institute for Highway Safety showing that ‘centreline hardening’, the use of rubber curbs and bollards to force drivers to slow down and proceed carefully through intersections, reduces left-turn speeds and increases safety for pedestrians in the intersection.

In the United States pedestrian fatalities rose 53 percent from 2009 to 2018 and now account for 17 percent of all traffic deaths. As over half of Vancouver’s fatalities involve turning movements at intersections, tightening corners so that drivers have to proceed slowly would also make pedestrians safer.

Seattle’s Transportation Engineering champion Dongho Chang has reported on the implementation of leading pedestrian intervals at forty locations in Seattle.

I have written about Leading Pedestrian Intervals, which give pedestrians an advance green crossing time ahead of car traffic, enabling a pedestrian to be well into the intersection before any driver turns through the same space. The leading interval is usually six to eight seconds. Over 2,200 of these have been installed in New York City, which has seen a 56 percent reduction in pedestrian and cyclist fatalities.

In one year Seattle has seen a 33 percent reduction in pedestrian collisions with the installation of Leading Pedestrian Intervals, compared with three years of previous data at the same intersections.

The cost of Leading Pedestrian Intervals or LPIs is minimal, at most a few thousand dollars per intersection. It saves lives.

Lastly, Rod King with 20 is Plenty in the United Kingdom has been involved in a survey with the British Department of Transportation showing that 80 percent of drivers in city downtown areas adhere to a strict speed limit of 30 km/h (approximately 20 mph) and accept that speed. In residential areas outside the downtown, 70 percent of motorists stay within that speed.

You can take a look at the data here, which shows that maintaining city speeds of 30 km/h resulted in an immediate 30 percent reduction in pedestrian and cyclist fatalities.

Slower speeds, tighter driver turning movements and advanced green time for pedestrians saves lives. They are simple approaches with massive benefits, and can be achieved with little cost. So why are we not valuing human lives ahead of the right of vehicular drivers to travel quickly?

Slower speeds mean healthier communities where people of all ages and abilities can use the street without fear of being crashed into, less pollution, and better mental and physical health. As Mr. King notes, enforcement is easy: simply install cameras and follow up diligently on drivers who choose to drive too fast.

You can take a look at the YouTube video below about Leading Pedestrian Intervals, as well as another video with Mr. King describing why 20 miles per hour is Plenty in Britain’s cities and areas.

 

 

 

 

 

17 Oct 02:02

Weeknotes: evernote-to-sqlite, Datasette Weekly, scrapers, csv-diff, sqlite-utils

This week I built evernote-to-sqlite (see Building an Evernote to SQLite exporter), launched the Datasette Weekly newsletter, worked on some scrapers and pushed out some small improvements to several other projects.

The Datasette Weekly newsletter

After procrastinating on it for several months I finally launched the new Datasette Weekly newsletter!

My plan is to put this out once a week with a combination of news from the Datasette/Dogsheep/sqlite-utils ecosystem of tools, plus tips and tricks for using them to solve data problems.

You can read the first edition here, which covers Datasette 0.50, git scraping, sqlite-utils extract and features datasette-graphql as the plugin of the week.

I'm using Substack because people I trust use it for their newsletters and I decided that picking an option and launching was more important than spending even more time procrastinating on picking the best possible newsletter platform. So far it seems fit for purpose, and it provides an export option should I decide to move to something else.

Writing scrapers with a Python+JavaScript hybrid

I've been writing some scraper code to help out with a student journalism project at Stanford. I ended up using Selenium Python running in a Jupyter Notebook.

Historically I've avoided Selenium due to how weird and complex it has been to use in the past. I've now completely changed my mind: these days it's a really solid option for browser automation driven by Python thanks to chromedriver and geckodriver, which I recently learned can be installed using Homebrew.

My preferred way of writing scrapers is to do most of the work in JavaScript. The combination of querySelector(), querySelectorAll(), fetch() and the new-to-me DOMParser class makes light work of extracting data from any shape of HTML, and browser DevTools mean that I can interactively build up scrapers by pasting code directly into the console.

My big breakthrough this week was figuring out how to write scrapers as a Python-JavaScript hybrid. The Selenium driver.execute_script() and driver.execute_async_script() (TIL) methods make it trivial to execute chunks of JavaScript from Python and get back the results.

This meant I could scrape pages one at a time using JavaScript and save the results directly to SQLite via sqlite-utils. I could even run database queries on the Python side to skip items that had already been scraped.
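The pattern looks something like this sketch (the URLs, the "items" table and the JavaScript snippet are illustrative, not the actual project code, and chromedriver is assumed to be installed):

# Sketch of the Python+JavaScript hybrid scraping pattern.
import sqlite_utils
from selenium import webdriver

JS = """
return Array.from(document.querySelectorAll('article h2 a')).map(el => ({
    title: el.textContent.trim(),
    url: el.href
}));
"""

db = sqlite_utils.Database("scrape.db")
driver = webdriver.Chrome()
try:
    for page_url in ["https://example.com/page/1", "https://example.com/page/2"]:
        # Query on the Python side to skip pages that were already scraped
        if "items" in db.table_names() and list(
            db.execute("select 1 from items where page = ?", [page_url])
        ):
            continue
        driver.get(page_url)
        # The JavaScript runs in the browser; its return value comes back as Python lists/dicts
        rows = driver.execute_script(JS)
        for row in rows:
            row["page"] = page_url
        db["items"].insert_all(rows, alter=True)
finally:
    driver.quit()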

csv-diff 1.0

I'm trying to get more of my tools past the 1.0 mark, mainly to indicate to potential users that I won't be breaking backwards compatibility without bumping them to 2.0.

I built csv-diff for my San Francisco Trees project last year. It produces human-readable diffs for CSV files.

The version 1.0 release notes are as follows:

  • New --show-unchanged option for outputting the unchanged values of rows that had at least one change. #9
  • Fix for bug with column names that contained a . character. #7
  • Fix for error when no --key provided - thanks, @MainHanzo. #3
  • CSV delimiter sniffer now detects ; delimited files. #6

sqlite-utils 2.22

sqlite-utils 2.22 adds some minor features - an --encoding option for processing TSV and CSV files in encodings other than UTF-8, and more support for loading SQLite extension modules.

Full release notes:

  • New --encoding option for processing CSV and TSV files that use a non-utf-8 encoding, for both the insert and update commands. (#182)
  • The --load-extension option is now available to many more commands. (#137)
  • --load-extension=spatialite can be used to load SpatiaLite from common installation locations, if it is available. (#136)
  • Tests now also run against Python 3.9. (#184)
  • Passing pk=["id"] now has the same effect as passing pk="id". (#181)
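As a usage example (the database, table and file names here are made up), importing a Latin-1 encoded CSV might look like this:

sqlite-utils insert data.db mytable data.csv --csv --encoding=latin-1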

Datasette

No new release yet, but I've landed some small new features to the main branch.

Inspired by the GitHub and WordPress APIs, Datasette's JSON API now supports Link: HTTP header pagination (#1014).

This is part of my ongoing effort to redesign the default JSON format ready for Datasette 1.0. I started a new plugin called datasette-json-preview to let me iterate on that format independent of Datasette itself.

Jacob Fenton suggested an "Edit SQL" button on canned queries. That's a great idea, so I built it - this issue comment links to some demos, e.g. this one here.

I added an "x" button for clearing filters to the table page (#1016) demonstrated by this GIF:

Animation demonstrating the new x button next to filters

TIL this week

Releases this week