Shared posts

15 May 03:26

The MacBook Pro w Touch Bar is Great for this Knowledge Professional

by janet tokerud


Macs are still Relevant to the Knowledge Professional

There are lots of reasons why a knowledge professional would like this Mac. First, it is a Mac, historically the favorite platform for creatives. I’m going to assume that knowledge professionals must be creative to do their jobs, regardless of the field. If you operate with knowledge at a professional level (and might be a knowledge enthusiast too), this is your brand.

But this gorgeous MacBook Pro is Pricey!

Let’s get this out of the way early. Yes, Apple raised its prices on these better-quality Macs. You get your money’s worth, but yeah, it will cost you. As my father used to say, money doesn’t grow on trees. And the last 30 years haven’t been kind to the middle class. But your tools have never been like this. You are better educated and more knowledge-capable than ever. If you are serious about your career, this big tax deduction is a great investment. It is your primary tool, so consider cutting back elsewhere in your budget.

iPad is Not (Yet) Enough. If you prefer the Mac and you create with knowledge, whether writing, advising, building presentations, or diagramming, you’ll probably want a Mac, not just an iPad, to get your work done. I say you need both if you can swing it, because reading, the primary activity of a knowledge professional, is best on an iPad by a large margin. It feels better. It’s less awkward.

Mac wins when you are writing. If you are reading and typing, you’ll probably be better off on a Mac than an iPad just due to the extra MacBook functionality. With a Mac you can still read, but you get some extras. The screen adjusts to any angle and is always with you (you don’t need a stand or foldable cover to prop it up). The keyboard is really good and built in. It works well on a table, on your lap, or anything in between. While Apple doesn’t give you a touch screen, it gives you the world’s best trackpad, which you can operate while keeping your hands in perfect typing position.

Mac wins when you are gathering and organizing as you begin or review a project. The Finder, with its multiple windows and ease of connecting to and working with external data, beats the iPad handily at these critical knowledge functions. There are many Mac apps to choose from. I like Scapple for visual note-taking. I like Ulysses because it keeps my serious writing and important notes in a database, so all that stuff is there and I can grab what I need quickly. I think Scrivener is great for long writing projects. And then there are the lovely OmniOutliner and OmniGraffle.

Why Would the MacBook Pro with Touch Bar be your choice for your next portable Mac?

Look and Feel

Jony Ive outdid himself with this Mac. I’ve got the space gray model, which is stunning. I believe, and it’s been documented, that beauty can inspire and energize. What knowledge professional doesn’t need all the inspiration and energy they can get when faced with tough creative challenges? It is smooth as silk opening, closing, and adjusting the screen angle. The screen is wide color and brilliantly lit. The Touch Bar is perfectly blended with the keyboard. The 4 Thunderbolt ports are small, leaving the sides uncrowded.

Form Factor

I’ve been using the MacBook 12 and I have to say that the small size and 2 lb. weight are wonderful on your lap or in your bag. It’s better for me to have the extra screen size, though. I like big screens because I get a larger canvas just like I like bigger notebooks and note pads when I need to figure out what I am doing on a new problem or project. For me, the 13″ size is as big as I’m willing to carry or hold. I would not have gone 13″ if the MBPTB was bigger or heavier than its compact 3 lbs. I like that it is smaller than the MacBook Air 13 by a large margin and, of course, way cooler. It’s more comfortable on my lap.

The Amazing Touch Bar

I’m typing on it now. I have an iPad Pro 12.9 next to me, but it is sitting idle for the reasons I’ve just covered. The Touch ID sensor on the right edge of the Touch Bar is my biggest favorite out of the box; it’s nice not to be typing a password to log in anymore. I haven’t learned much about the new Touch Bar yet, but I like that Apple has given me a small strip of iOS in this new MacBook Pro. That strip is great for discovering the commands available at any given moment. Simply engaging touch in the middle of a writing project may help your brain get a little more creative. I’m thinking it engages the right-brain, subconscious, primitive capacities of the mind, where breakthroughs and insights are facilitated.

Largest Trackpad

With the exception of the new MBP15 trackpad, this is the biggest trackpad around. Apple already makes by far the best trackpads. Turns out as you get into gestures including drag, pinch and zoom, the bigger the better. If palm rejection wasn’t excellent, the size of the trackpad would be an issue. Not a problem that I’ve experienced.

Fastest Solid State Drive

Apple says this solid state drive is twice as fast as the previous SSD generation. That affects how fast everything starts: you finish one thing and jump straight into another. It’s extremely responsive. There’s no resistance to your effort. Faster is better when you work on hard stuff.

The USB-C Thunderbolt Ports

I’m not the best one to judge the downsides of losing the ports that came on previous MacBook Pros. I’ve been using a single USB-C port on my MacBook 12 for over a year and a half, so 4 ports seem like a complete luxury even though they’re all USB-C. USB-C isn’t as compatible with older gear, so I’ve used Apple’s 3-port adapters when I wanted power and a backup drive at the same time. With four ports, I don’t need a hub or multi-function adapters. I can plug in power, a hard drive, and a USB stick all at the same time. I’ve got a 64 GB USB stick that has a regular USB 3 side and a USB-C side, so that’s already available, and I recommend one of these if you are new to USB-C.

My 2013 MacBook Pro 13 had an SD card slot, 2 USB3 ports and a Thunderbolt 2 port.  Now I get 2 full speed Thunderbolt 3 ports and 2 full speed second generation USB-C ports. I just bought 2 new $89 1-terabyte 7200 rpm G-drive portable drives in space gray from the Apple Store. I can’t find that $89 price anywhere else so I think Apple is helping out on that price a bit to make buying new USB-C drives palatable.

I am a technology enthusiast, so all these really powerful, fast, state-of-the-art ports seem good. The lack of legacy compatibility, and sometimes needing an adapter for old drives, seems like a very reasonable price to pay considering how small and light the MBPTB is and what I get. Apple sells its USB-C to USB 3 adapter for $9, and third-party ones run about $10. Apple is keeping these adapters affordable to smooth your way into the future.

You can keep all of your dongles at home, except maybe take that USB stick with both USB-C and USB 3. You already had a USB stick with you anyway, didn’t you?

 

15 May 03:26

MacBook Air vs. MBP13 or wait/hope for MacBook 12?

by janet tokerud

I want to talk for a minute about the MacBook 2016 and MacBook Pro 2016 that I purchased, and then get to what’s next for me. I bought my rose gold MacBook 2016 in the highest configuration; then the MacBook Pro with Touch Bar came out, and that led to a 2nd purchase. Other than the desktop Macs I replaced last year, these are still my main Macs. I don’t use desktops much.

I’m typing on my rose gold MacBook 12, which I prefer over my faster but heavier MBP with Touch Bar. So, after 3 years, I’m kind of in the mood for a new laptop and have that same dilemma, except a bit worse right now. I can’t believe there is no MacBook 12 for 2018 or 2019. I have a VERY hard time believing this is a discontinued model. No way!

Rumors have been going around for a long time that the next MacBook 12 will be driven by one of Apple’s A-series chips, not an Intel chip. The industry this year is coming to the conclusion that Apple needs to make this platform switch. Also, we have incursions from iOS 13, with Catalyst coming to help migrate iOS apps to the Mac. If iOS app migration and A-series chips continue to close in on Macs and macOS, the MacBook may get a double whammy soon!

Software development is slow compared to hardware, though firmware is essentially software burned into silicon, so the transition will still be trouble. There’s no way Apple can turn its back on Mac miniaturization. I know some may think Macs are yesterday’s news, but some of the best things, and all the big complex apps, have been on PCs and Macs, not on smartphones and iPads.

iOS apps are getting better because that is where the volume is: something like 20 smartphones for every Mac, and maybe 3 PCs. These numbers are guesswork, but in any case we have developers working on stuff that fits on tiny devices and never grows up into Mac apps.

Meanwhile, the Mac and Windows apps aren’t getting fully upgraded, if at all. FileMaker Go is practically a view-only app compared to FileMaker Pro Advanced, which can build incredibly powerful and complex databases and automate just about anything at scale. Part of the problem is that iOS is a touch, fat-finger environment that can’t handle as much detail and complexity as a Mac or, say, Windows 10.

One more time on desktop software: it’s not going away. It is being invested in, and it can run on things like a sexy little MacBook 12. In macOS Catalina, come fall, we get to plug an iPad into our Macs and use the Apple Pencil with the Mac software that supports it. Macs will get a little more fun in their lives, and who knows what happens with Apple’s A-series processors cranking out augmented reality on these Macs. It’s interesting that we are actually plugging the iPads in with wires; I guess the wires allow better performance, which might be understandable if we throw in a big dose of augmented reality. I hope someone finds a fun application for AR quickly, maybe genealogy family trees, kind of like time travel if the software can figure out how to visualize that.

So, temporarily, I’m stuck with the NEW COOL MacBook Air 2019. OK, 12 ounces heavier isn’t working for me, but I do admire the 13 inches of screen real estate over the 12; some stuff is designed for a minimum of 13″. Also, I very much appreciate the 2 Thunderbolt ports vs. 1 USB-C on the MB12. I’d get 4 on a MacBook Pro, 2 on each side, which really helps when you need a plug and power is on your right side.

The new Air 13 seems to get a thumbs down from reviewers based on performance, just like the MacBook 12s have. The Air has a girly 8th-gen Y-series Intel processor (a lousy 2 cores) vs. the beefy, hotter 9th-gen 4-core processor in the new MBPs. All I can say to Apple is you could have done a little more for the Air, like offer an option. But I suppose that wedge shape and that 1/4 lb. lighter Mac couldn’t handle these beefier, hotter chips. I think the marketing guys were trying to drive a wedge between the two models. Whatever.

Even though 1/4 lb. is almost nothing, I seem drawn to the Air 13. It is a vote for smaller and lighter. I would get the next MB12 if there were any available other than a 2017 (that’s too historic, and only one year ahead of what I have now!). Also, cost is an issue with the MBP. It’s too easy to end up spending close to $3,000 on one of those if you, like me, want 16 GB of RAM and a terabyte SSD for an extra $200. The Air I want will probably cost me closer to $2,000, but once you open the floodgates of processor choices on the MBP13, you cross the $2,500 line in a heartbeat.

Anyway, if you are a woman like me who is not crazy about even an extra 1/4 lb., or you like little devices like I do and really aren’t doing lots of intensive gaming or video processing or something, I say go with the Air 13. OK, one last admission: I have an iMac Pro on my desk should some crazy reason for more power come up at some point. I pretty much hate working at my desk, so most work is done like this, on the couch with my MB12.

There is an outside chance a MacBook 12 will show up if Apple doesn’t like its sales of Macs in September but I doubt it. I’m still a little on the fence about buying before I can get a 2019 or 2020 MacBook 12 but as soon as I make a move, you’ll be the first to know!

15 May 03:25

Explore Explain is a new visualization podcast about how the charts get made

by Nathan Yau

From Andy Kirk, there’s a new visualization podcast in town:

Explore Explain is a new data visualisation podcast and video series. Each episode is based on a conversation with visualisation designers to explore the design story behind a single visualisation, or series of related works. The conversations provide an opportunity to explain their design process and to share insight on the myriad little decisions that underpin the finished works. It also shines a light on the contextual circumstances that shaped their thinking.

Audiences will gain an appreciation of the what, the why and the how, learning about the hidden problems and challenges, the breakthroughs and the eureka moments, the pressures and frustrations, the things that were done and the things that were not done, as well as the successes and the failures.

My main podcast-listening mode was while driving, so I’m way behind, but this sounds promising. It’s right in line with Kirk’s Little of Visualization Design blog project.

Tags: Andy Kirk, Explore Explain, podcast

15 May 03:25

"Don't shout. Don't whisper."

by peter@rukavina.net (Peter Rukavina)

Rules for using the telephone from The Island Telephone Company. We need the equivalent for Zoom.

15 May 03:25

Creating Art with Time

Hey! It’s Fabian. I’m a developer at Datawrapper. This week I got inspired to make art.

Since the beginning of the month, Nadieh Bremer has been posting amazing drawings that she created with a plotter. Nadieh is a Dutch data visualization designer and astronomer who creates awesome visualizations (check out her portfolio). When I saw her tweets I thought: “This can be done with Datawrapper!”

I researched a little and found out that Nadieh was plotting curves created by a harmonograph. A harmonograph is a mechanical device (video) that uses pendulums to create geometric images – but it doesn’t have to be mechanical. Thanks to Wikipedia, I found formulas to calculate the path of a harmonograph and implemented a simulation. What you can see here are some paths a pendulum can take over time, visualized with Datawrapper:

What you see here are six individual Datawrapper charts. Some weeks ago David showed us how Datawrapper can be used for small multiples, and this method is perfect for my little art display.

If you are wondering what chart type I used to create these lines of harmony, you guessed right: they are not line charts but scatterplots, my favorite and the most versatile visualization type. The lines are created with the Custom lines and areas feature.

How it’s made

It took me a minute to visualize and publish each harmonograph in Datawrapper; the more complicated part was getting the data, exploring, and finding nice shapes. I used Observable to prototype a harmonograph simulation in JavaScript based on formulas from Wikipedia.
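For the curious, the damped-pendulum idea from those Wikipedia formulas can be sketched in a few lines. This is a minimal illustration, not the actual code from my Observable notebook, and the pendulum parameters below are arbitrary examples rather than the ones used for the charts above:

```javascript
// Damped-pendulum harmonograph: each pendulum contributes
// A * sin(f*t + p) * exp(-d*t) to one axis (A = amplitude,
// f = frequency, p = phase, d = damping).
function harmonograph(t, pendulums) {
  return pendulums.reduce(
    (sum, [A, f, p, d]) => sum + A * Math.sin(f * t + p) * Math.exp(-d * t),
    0
  );
}

// Sample the pen position over time; returns [x, y] pairs,
// ready to paste into a scatterplot.
function trace(xPendulums, yPendulums, steps = 2000, dt = 0.01) {
  const pts = [];
  for (let i = 0; i < steps; i++) {
    const t = i * dt;
    pts.push([harmonograph(t, xPendulums), harmonograph(t, yPendulums)]);
  }
  return pts;
}

// Two pendulums per axis; slightly detuned frequencies on the
// y-axis give the drifting, braided look of a real harmonograph.
const xP = [[1.0, 2.0, Math.PI / 2, 0.004], [1.0, 6.0, 0.0, 0.0065]];
const yP = [[1.0, 1.99, 0.0, 0.008], [1.0, 5.97, Math.PI / 2, 0.005]];
const points = trace(xP, yP);
```

Feeding the resulting (x, y) pairs into the Custom lines and areas feature is what draws the figure as one continuous line.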

Check out my Observable notebook and play around with the sliders to create your own harmonograph! Below the sliders you’ll find the data to import into Datawrapper’s Custom lines and areas feature. After importing to Datawrapper you only need to style your line a little. I used the following attributes:

@color:transparent @stroke:#663366 @opacity:0.8 @width:0.5

Feel free to create one for yourself and show it to me on Twitter (@sto3psl)!


That’s it for this week. I hope you enjoyed reading how math and physics can create pretty shapes, and that it sparked some interest in using Datawrapper for art. If you want to start, hover over the charts above and click on “Edit this chart” to get right into the chart creation process. We’ll see you next week!

15 May 03:25

Face Off

by Richard Woodall

As with most everything else in our lives, the past few months have utterly transformed the symbolic significance and economic status of masks. For decades, many nations (including Germany, France, Austria, Belgium, Ukraine, Russia, Canada, and the U.S.) and municipalities had been attempting to enforce laws against face covering in public spaces, motivated generally by institutionalized Islamophobia and the imperative to securitize urban environments. As recently as last October, Hong Kong sought to outlaw public masking in an effort to clamp down on protests. Now, in a matter of weeks, such policies have been reversed. Authorities are increasingly mandating the public use of masks, scrambling the social semiotics of face covering. Once, to walk the streets with a hidden face was to make oneself an outlaw; now it is becoming a sign of civic responsibility (if not cosmopolitan elitism).


The connotation of masks is shifting, but the rate and extent have been uneven both within and between countries. East Asian nations, marked by recent experience of SARS and H1N1, have typically been quick to adopt mass mask usage. In most of Europe and North America, with no equivalent memories to draw on, the situation is more conflicted. People with East Asian heritage, who have tended to be early adopters in the U.S. and the U.K., have been the target of racist attacks, a trend aggravated by the Sinophobic rhetoric of some politicians. Meanwhile, for certain demographics — black men, for example — the risks of appearing masked in public can be as grave as catching the virus itself. The unequally distributed consequences of mask wearing are another demonstration of the shallowness of the “all in this together” tropes that have captioned government responses to the crisis.

After initial confusion, a scientific consensus appears to have developed around the recommendation that healthy people wear masks in public — not because it meaningfully reduces their chances of contracting Covid-19 but because (alongside established hygiene and distancing measures) it mitigates the danger of asymptomatic carriers transmitting it to others. This pandemic has exposed us all to a strange, inverted form of paranoia, whereby our every action becomes a possible source of contagion and a prospective vector for another’s suffering. Covering my face seemingly allows me to opt out of this global cycle of misery, dispossession, and death by partitioning my body from the pattern of viral circulation — a case of clean hands allowing for a clear conscience.

However, the question of who gets to wear a mask, and of what kind, is far from an issue of simple personal responsibility. The sudden visibility (or visible absence) of masks on the faces of the people around us reveals our common implication in the transnational political economy of protective equipment, which is currently working to distribute the harms of this crisis according to the prerogatives of class power and the profit motive.


Media coverage of the mask issue has been haunted by a recurring set of questions. If “the science” increasingly supports masking as potentially mitigating the spread of Covid-19, why are there still so many contradictory policies about them? Why hasn’t there been a consistent message across the board, backed up by the necessary distribution of resources? The fact is, “the science” is only one factor bearing on state decision-making, and a relatively weak one at that. The stances that governments take on masks have as much to do with their increasing economic and symbolic value as their medical benefits.

The saga around Trump and Pence’s refusal to mask up is one particularly obvious example. But more broadly, the pandemic has transformed masks from just another cheap, low-margin bulk good shipped around the world by the ton to a desperately prized resource in a savagely competitive market.

Before the outbreak began, China was the world’s primary medical mask manufacturer, producing half the global yearly stock. But this didn’t mean the country was ready to equip its own population in the event of a crisis. When Covid-19 hit, the Chinese government was forced to restrict exports and even had to import protective equipment from Japan, India, the U.S., and Tanzania. This rapidly drained supply from the global market, hampering the efforts of soon-to-be affected countries to build their own stockpiles.

China is once more in a position to export (in part because it has repurposed factories that had been making toys, clothes, or electronics), but in the interim, the global mask market has completely changed. Buyers must now navigate a volatile landscape in which nation-states, medical-supply companies, and independent brokers compete over a limited stock, all the while contending with surging demand, distribution bottlenecks, and widespread quality-control issues. The bidding has become increasingly acrimonious, with governments prepared to go to remarkable lengths to secure their own supply. In March, Germany impounded a shipment of masks headed for Switzerland; a month later, officials in Berlin accused the U.S. of “piracy” when it allegedly swiped 200,000 masks earmarked for the municipal police department. Private buyers have repeatedly outbid state procurers. The New York Times, for instance, has reported on a group of French officials who were supposedly “outbid at the last minute by unknown American buyers for a stock of masks on the tarmac of a Chinese airport.”


Under these circumstances, any governmental prevarication can be catastrophic. Both the U.S. and the U.K. have failed miserably at providing health-care staff and other at-risk workers with adequate protective equipment. There have been widespread reports of medical staff treating patients while wearing substandard or improvised gear: nurses wearing trash bags, doctors performing tracheotomies in ski goggles. These procurement failures have undoubtedly impacted policy announcements, with the U.K. government’s reluctance to issue more concrete advice on mask wearing likely motivated in part by fears that citizens would buy up supplies needed for health workers.

The U.S.’s domestic medical supplies market has become a bewildering microcosm of the global mask economy: Hospital buyers are obliged to compete with other health-care providers, federal officials, and medical-supply companies, frequently attempting to source stock direct from China or through unknown and uncredentialed “entrepreneurs.” There are reports of shipments of masks being flipped multiple times by brokers before arriving at the point of use — if they arrive at all. So dire is the situation that on April 1, the Wall Street Journal felt obliged to explain to its readers “Why the Richest Country on Earth Can’t Get You a Face Mask.”

In a sense, the attitude of spoiled incredulity in that headline cuts to the heart of the matter. Until now, it might have seemed that the elaborate tangle of capital’s supply chains was organized precisely to service the desires of the American consumer, but the underlying logic of this system is to maximize profit, pure and simple.

The inability of global supply chains to deliver masks to those who need them the most marks not their failure but proof of their continuing functionality. By concentrating production in the areas with the cheapest labor, mandating lean inventories and “just in time” logistics, and insisting that, even during a crisis, distribution is channeled primarily through markets, the infrastructure of global capitalism is configured to protect profit-seeking above all other considerations. As Ingrid Burrington wrote for Data and Society, “the decisions that tend to make supply chains fragile … are decisions that tend to be rewarded by markets and governments … You’ve destroyed all of your competitors and now we can only get testing swabs from one part of Italy? That’s not a weak supply chain, it’s proof that the market rewards the best products.”

For national governments desperate to retain authority and legitimacy in the face of this crisis, masks have become a powerful symbol of state efficacy. Just as mask wearing is becoming a sign of individual civic responsibility, so has consistent supply of medical masks and other personal protective equipment become an index of a state’s ability to protect its citizens. The efforts of governments to cover up their failings indicate how seriously they are taking the optics of this issue. In February, Western media was horrified by China’s attempts to censor whistleblowing doctors; now in the U.K., National Health Service management are threatening to discipline staff who speak out about working conditions, while ministers bluster that the country is now the “international buyer of choice for personal protective equipment.”

Under these circumstances, masks have acquired special geopolitical importance. Western political commentators have coined the term “mask diplomacy” to describe the efforts of certain nations (i.e., China) to build global hegemony through strategic displays of beneficence. E.U. foreign policy chief Josep Borrell has spoken of a “global battle of narratives” in which “China is aggressively pushing the message that, unlike the U.S., it is a responsible and reliable partner.” The U.S., meanwhile, has helpfully demonstrated what the opposite of “responsible” and “reliable” looks like. Trump’s attempts at mask diplomacy have included an increasingly impotent stream of anti-China rhetoric, squabbling with American medical supplies producer 3M over mask exports, and defunding the WHO, one of the few multilateral institutions that could, at least theoretically, supervise the distribution of masks and other supplies on the basis of need.

How many people will die because of the deranged international political economy of masks? On April 17, the WHO warned that Africa could become the next epicenter of the pandemic. How will the nations of the Global South fare in the increasingly brutal world market for medical supplies? The WHO, the UN, and the Africa Centres for Disease Control and Prevention are beginning to disseminate masks, gowns, and other necessities to under-resourced health-care systems across the continent, but African nations may still be forced to rely on scraps from the table of China, Europe, and the U.S. The inability of global capitalism to support the equitable distribution of basic, universal human needs will only become more transparent in the coming months.

The current international mask market is, to borrow a term from Achille Mbembe, a global “necropolitics”: a world system for the unequal allocation of risk, harm, and death. This encapsulates everything from the U.S.-China trade war to equipment shortages at our local hospitals to ourselves, eyeing up that 10-pack of disposable surgical masks on the Amazon marketplace, finger hovering over the “Buy Now” button. At every point in this chain there is a structural tension between the necessity of finding collective means to mitigate risk and provide care for everyone, and a politico-economic model which ceaselessly channels us toward individualized, transactional responses designed to distance each of us from the suffering of others.

In other words, if the question “should I wear a mask?” feels fraught with implications of craven self-preservation and the unjust distribution of affliction, this is the structural effect of factors that extend far beyond the scope of personal choice and responsibility.


In an article for the Guardian, Samanth Subramanian writes of the “various tribes of mask-wearers”: shoppers wearing medical-grade N95 masks in the pasta aisle; a “finance bro” jogging in a designer “Vogmask”; “the extreme worrier in the GVS Elipse P100.” The capacity to express one’s response to the crisis in terms of boutique consumer choices is, however, available only to certain social strata. While the affluently anxious mull over the ethics of buying airtight respirators for themselves and their family, delivery drivers and grocery clerks are lucky if they get a disposable plastic mask. Others make do with improvised homemade coverings.

This is how global politico-economic dynamics, mediated by national and regional formations of race and class, are currently playing out — on the streets, in shops and depots, and over our faces. Covid-19 presents a fundamentally collective hazard, against which we are only as safe as our most vulnerable neighbor. Structures of geopolitical and economic competition have thus far prevented us from developing an appropriately cooperative response at the international level; at the local scale, this results in a situation where the personal management of individuated risk starts to seem like the only available recourse.


It does not have to be this way, however. One of the most emphatic effects of the pandemic has been to illustrate the condition of mutual exposure and vulnerability imposed on us all by the vicissitudes of globalized society. Masking — both the practice of wearing and the struggle to obtain them — knits us more tightly to this reality, implicating us directly in the economic and geopolitical processes currently driving the crisis response. What forms of political subjectivity might this moment disclose? What forms of solidarity can sustain local and transnational systems of care, while resisting capture by nations and markets? What coalition can be built of the sickly, the frightened, the outraged, and the faceless?

All over the world, workers are fighting for safer working conditions, proper protective equipment and better pay. Amazon, for instance, rakes in record profits and grabs market share while firing and attempting to smear warehouse worker Christian Smalls for speaking out about unsafe working conditions. As with all forms of disaster capitalism, this is an opportunistic extension of business as usual, stealing a march in the struggle between workers and bosses, collective action and self-interested greed. Yet it is profoundly encouraging that even among the atomizing conditions of lockdown, the actions of Amazon and other exploitive corporations do not go unresisted.

Masks are playing an increasingly symbolic role in these struggles. In the U.K., the state’s failure to provide NHS staff with the equipment necessary to work without putting their lives at undue risk (at least 100 health-care workers have now died from Covid-19) has become a focal point for outrage at the government’s handling of the crisis. The acronyms PPE (personal protective equipment) and FFP3 (the U.K. designation for the airtight masks required by medics) are now shorthand for the incompetence and venality of Boris Johnson’s administration. Meanwhile, in the U.S., May Day saw a wave of coordinated strikes by workers at Amazon, Whole Foods, Instacart, Target, and Shipt, all demanding adequate protective equipment, improved safety standards, and proper hazard and sick pay. For these workers — generally among the most precarious and poorly remunerated, now suddenly understood to be “essential” — the mask is both an emblem of their solidarity and part of the physical substance of their demands.

In this context, masking begins to demonstrate how “personal protection” can be conceived of as a genuinely collective project. When a group of warehouse workers form a picket line, sporting a motley array of careworn bandanas, sliced-up pillowcases, and mutilated T-shirts, it is evidence that, refusing to be crushed by desperate circumstances, they have found the courage necessary to care for one another at a time when both their boss and their government have proved willing to leave them for dead. More than just an indictment of corporate brutality, this act of group masking affirms the ability of people to develop forms of cooperative security against the intertwined threats of Covid-19 and capitalist necropolitics. Here, the mask acquires a fresh double significance: a necessary condition of survival and the common basis on which vital structures of communal solidarity might be reaffirmed.

15 May 03:24

Uh Oh… Double CoVid Trouble

by Dave Pollard


author’s rough estimate of current and projected CoVid-19 infection rates, based on data listed above and in the article below; data for the “world” and for Brasil are interpolated from other sources using various assumptions, and are hence much less accurate than other data

A double dose of really bad news today (May 13, 2020) about what the future may hold for the pandemic.

The first news was a set of fairly well-substantiated research data from New York, France and Spain that all came to the same conclusion: The rate of infection so far is a lot lower than we’d thought, and the mortality rate is thus a lot higher than we’d thought.

For New York State, which has a population of about 20 million, the first results of serology (blood testing for CoVid-19 antibodies) suggest that only 15% of the state’s people (about 3 million people) have been infected so far. Official reports are that about 20,000 New Yorkers have died, and a report today in the NYT suggests again that “excess deaths” in New York and in many other states to date are likely in the range of 50% more than the official reports (for understandable reasons I’ve discussed in earlier articles). 30,000 deaths divided by 3 million people = a 1.0% mortality rate, which is 2-5x what most of us were expecting based on the low mortality figures reported to date.

Results for Indiana support this unexpectedly low infection rate: there they have only had 3% of serology tests come back positive, although they’re earlier in the curve than New York State. Likewise it has been estimated that Québec’s infection rate is only about 5%.

For France, which has a population of 60 million and the same death toll as New York State, their serology tests have come back only 5% positive (meaning about 3 million people have been infected in France if the sample is representative). Their official death toll of 20,000 has likewise been shown to be only 2/3 of the “excess death” numbers there for the year to date, compared to recent years, so once again 30,000 deaths divided by 3 million people = 1.0% mortality rate.

The early numbers for Spain are very similar, and data for other countries don’t suggest these numbers are inaccurate.

What does this mean? If the first wave mortality rate had been, say, 0.25% (similar to the mortality rate for measles, as a lot of us were predicting), and the final actual first wave US death toll turns out to be 220,000 (again, assuming 50% more than UW projected “official” numbers) then that would mean 220,000/0.0025 = 88 million Americans would have been infected by this summer. That’s only 27% of the population, but it’s a significant proportion that hopefully would be immune to the next wave. So that’s 220,000 deaths, or 0.07% of the population dying while 27% of the population gets (we hope) immune from further infection.

But if the actual mortality rate is 1.0% (which now appears to be closer to the case) — 4 times higher — that means only 22 million Americans (just 7% of the population) will have developed any immunity by this summer that could protect them (and their contacts) in the face of a potential upsurge from relaxing social distancing too soon (which is already starting to happen), or from the next wave, while the other 93% of Americans now have a four times higher risk of dying from the next upsurge or wave of CoVid-19 than we thought was the case. And it leaves us miles short of the 60-70% minimum proportion with antibodies to the virus needed for so-called herd immunity, even in heavily-hit areas like New York.
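The arithmetic above is easy to reproduce. Here is a back-of-the-envelope sketch in Python using the article’s figures; the 330 million US population used as the denominator is my assumption, not a number from the article:

```python
# Implied infections and immunity share under two infection-fatality-rate
# assumptions, using the article's projected first-wave US death toll.

US_POPULATION = 330_000_000   # assumed denominator
PROJECTED_DEATHS = 220_000    # projected "official" toll plus 50% excess deaths

for ifr in (0.0025, 0.01):    # 0.25% (early guess) vs. 1.0% (serology-implied)
    implied_infections = PROJECTED_DEATHS / ifr
    immune_share = implied_infections / US_POPULATION
    print(f"IFR {ifr:.2%}: {implied_infections / 1e6:.0f} million infected "
          f"({immune_share:.0%} of the population)")
```

The same death toll divided by a mortality rate four times higher yields one quarter the implied infections, which is exactly why a higher mortality rate means far less accumulated immunity.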

The news gets worse. Just a few hours ago, the supreme court of Wisconsin struck down the Governor’s stay-at-home order extension in that state as being unconstitutional. That leaves that state with no legal regulations against pandemic spread whatsoever a week from now when the original period of the order expires, unless the recalcitrant Republican right-libertarian-dominated state house suddenly has a miraculous change of heart. This will almost certainly result in copycat constitutional challenges in every other state, that might spell the end to any systematic social distancing regulations in the US, which already has the world’s highest CoVid-19 death toll.

The disease is presenting us with several simultaneous paradoxes. Clearly, with only 4-7% of Americans infected so far (barring more dangerous rulings like the Wisconsin one and commensurate irresponsible behaviour and death spikes), in many parts of the US many people don’t know a single person infected by the disease, and therefore can’t really believe it’s a serious problem. They will only believe their governments, and the stories they hear from hard-hit centres, for so long in the absence of any personal confirmation, before they will tend to fall victim to cynical manipulative politicians and/or conspiracy theorists. Many will become at least sloppy, if not outright defiant, about the kinds of rigorous social distancing that have so far kept the death toll so low.

The situation in other parts of the world is not that different. The infections tend to be extremely localized, principally in areas that have high population density (per household, not per square mile), high mobility, and low incomes requiring people to work long hours often in high-risk positions like private nursing home orderlies, billeted shift workers and slaughterhouse workers. They spread most easily among seniors and those with compromised immune systems, especially when those people are crowded together in sometimes-derelict institutions. For everyone else, the disease may be all but invisible. The nature of this disease is that social distancing seems to work incredibly well at limiting its spread, but in the absence of social distancing it spreads like, well, a plague.

We generally know little about these viruses, such as why they seem to have seasonal peaks and recurrences, and how exactly they make us sick. The probability of us developing a reliable vaccine in the near future is reportedly low, and there is a large enough population who will refuse the vaccine (10% is probably sufficient) that even if it is developed it may not be enough to keep the disease from coming back again and again, until after our natural or vaccinated immunity has worn off, or the virus has mutated to a new form we are not immune to. For that reason some believe it will become endemic, which means that as many as 1% of the world’s population (78 million people) might eventually die from it, unless it evolves into a weaker form or an effective vaccine is developed. A worst case scenario, of course, but not a highly-improbable one. (This compares to the 3% mortality rate — 50 million people — of history’s most deadly pandemic in 1918. But that 3% was due to its mutation to a virulent second wave form that killed mostly the young and healthy.)

What we do know is that between 90% and 99% of citizens of most developed countries are still likely fully susceptible to CoVid-19 now, exactly when many of their governments, oblivious to the lessons of history, are beginning to ease restrictions that have kept the vast majority of their citizens from getting infected, and of whom about 1% will likely die when they do (that 1% being an average varying greatly depending on age, health and living conditions). That means a virtual certainty of an upswing in cases and deaths to as much as 20 times the current tolls.

We also know that additional waves are likely, and though they are likely to be weaker, there is a chance they may be stronger, or even much stronger. And they are likely to last anywhere from one to three more years. If you’re invested in the stock market, you may want to hope that the majority see no problem with tens of millions dying as the price of restoring economic ‘growth’. That is probably the cost. Whether we will choose to pay it remains to be seen. But the odds of us escaping relatively unscathed are suddenly looking much worse than they did even a few short days ago.

This is the sixth of a series of articles on CoVid-19. The fifth is here. The fourth, with a link to the first three, is here.

15 May 03:21

Not an expert

by Josh Bernoff

I know a lot about a lot of topics — I’ve spent hundreds of hours on these topics. And yet, I wouldn’t call myself an expert. And the reason for that is the reason you should be skeptical of experts you see online or on TV. Take artificial intelligence. I ghostwrote a book on … Continued

The post Not an expert appeared first on without bullshit.

15 May 03:20

Report claims secret Tesla battery will bring electric car cost down considerably

by Brad Bennett

A new report has come out stating that Elon Musk is planning on revealing a new battery technology that will help bring electric cars down in price, so they’re closer to the cost of a traditional gas-powered vehicle.

There’s little information to go on, but Musk has been teasing something like this for years, so it’s not entirely out of the blue.

The company is expected to reveal the new battery and maybe some other surprises during a ‘Battery Day’ near the end of May, according to Reuters. Following that, it plans to release this battery in the Model 3 in China before bringing the high-density technology to North America.

The publication says this new design and technology is built to last for millions of miles and should enable Tesla to lower the price of its cars so it can sell them at parity with gasoline cars.

The publication states that the new battery tech was developed with China’s Contemporary Amperex Technology Ltd (CATL) and Tesla in collaboration with a team of academic battery experts recruited by Musk. This could very well mean the Canadian team of battery researchers at Dalhousie University in Halifax, who have been pivotal to Tesla’s battery development thus far.

Sources who spoke with Reuters claim that the batteries will use new technologies such as a cobalt-free or low-cobalt design, which in conjunction with many other technical processes will reduce the internal stress of all of the materials and allow them to store more energy over more extended periods. The lack of cobalt is significant since it’s one of the most expensive battery components used today.

There is also speculation that the company will open a massive Terafactory to produce these batteries that will be 30 times larger than its already massive Gigafactory in Nevada.

During an investors call in January, Musk said, “We’ve got to scale battery production to crazy levels that people cannot even fathom today.” Well, if these leaks are any indication, it looks like we’re going to see what he meant very soon.

Source: Reuters 

The post Report claims secret Tesla battery will bring electric car cost down considerably appeared first on MobileSyrup.

15 May 03:20

Coronavirus Visual Rundown, Part II (The Process 089)

by Nathan Yau

This issue of The Process is public.

Hi,

Nathan here. This is The Process, the weekly newsletter for FlowingData members where I take a closer look at charts.

This issue is public, but if you’d like to support FlowingData, becoming a member is the best way to do it. Membership keeps the blog running as a free resource for everyone and gets you bonus access to tutorials, courses, and more.

A new tutorial on animated charts went up yesterday. Maarten Lambrechts explains step-by-step, using the ggplot and gganimate packages in R.

This week, I follow up on the coronavirus visual rundown from two months ago, when there were only 1,190 confirmed cases in the United States. (omg.) A lot has happened since then.

Simulations and Pandemic Modeling

In mid-March the simulations and modeling ramped up. There was a lot we didn’t know about the virus (and still don’t), so there were a lot of what-ifs and simplified models of imaginary pandemics. We were learning about the now all too familiar social distancing and staying at home.

For The Washington Post, Harry Stevens used a simplified particle system to simulate social distancing and how it helped to flatten the curve:

It became the most read article of all time on The Post’s site.

Speaking of flattening the curve, this comic by Marcos Balfagón showing four medical workers pulling down the curve struck a chord with me:

Forecasts were all very abstract for a while with wide bands of uncertainty, but as we learn more, those bands seem to be converging more. These days, I find myself referring to FiveThirtyEight’s model tracker, which currently shows seven models for forecasted number of deaths.

FiveThirtyEight also illustrated why it’s so hard to make a good model for the current situation. There are just so many unknowns and variations that add more complexity to modeling. It’s hard to feel certain about how this all turns out.

So we come back to social distancing. The New York Times, using simulation data from the Kyoto Institute of Technology, visualized how droplets spread when you sneeze:

There were also the more literal simulations that showed how fake germs spread when you don’t wash your hands for the recommended 20 seconds.

Looking for a Baseline to Compare

Instead of looking to the future, another route has been to look to the past. If we can find something similar in history, maybe we can make better decisions for the future.

For National Geographic, Nina Strochlic and Riley D. Champine looked at the curves for the 1918 flu pandemic:

The points of interest are when social distancing began and ended and how that relates to the number of deaths. Cities eased up on social distancing, and it was not uncommon to see a second wave of deaths. To be honest, I’m more amazed that there is data for 1918.

Instead of comparing patterns over time, The Upshot compared infection and fatality rate for Covid-19 against other major killers, based on data from the Centers for Disease Control and Prevention:

At the time of publication in mid-March, there was a wide range of possibilities, so The Upshot used a pair of sliders to let you see what happened when you changed the rates.

More recently, we’ve seen the comparison of confirmed Covid-19 deaths against the seasonal average. Here’s a chart from The Washington Post and the Yale School of Public Health:

One of the challenges of making good estimates for coronavirus-related deaths is that the cause of death is sometimes not so straightforward. Some people might have pre-existing conditions. So some calculated excess deaths: total deaths minus both the reported Covid-19 deaths and the seasonal average.
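The subtraction is simple enough to show with a toy calculation; all of the numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Illustrative excess-deaths arithmetic (all figures are made up, not from
# any real dataset). Excess deaths are observed deaths above the seasonal
# baseline; the part not attributed to Covid-19 hints at undercounting.

observed_deaths = 12_000   # total deaths in a region over the period
baseline_deaths = 9_000    # seasonal average for the same period
covid_deaths = 2_000       # officially attributed to Covid-19

excess = observed_deaths - baseline_deaths   # 3,000 deaths above normal
unexplained = excess - covid_deaths          # 1,000 not attributed to Covid-19
print(f"Excess deaths: {excess}, unattributed: {unexplained}")
```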

It could be a while, if ever, before we know the true numbers.

Shift in Daily Life

If you’re an essential worker, your life is probably more stressful than before. I hope you’re able to find some time to take a breath. Thank you for doing what you do.

More generally, priorities very obviously shifted during all of this. The shifts are visible from many angles.

There was the change in consumer spending, as shown by Lauren Leatherby and David Gelles for The New York Times:

There was a lot more cooking at home and less of everything else, especially retail shopping and travel.

Popular Google searches went from school and events to all things coronavirus. For The Washington Post, Alyssa Fowers showed the changes:

The Google News Initiative and Axios also showed the change in search for specific search topics:

Heck, I even made a toilet paper calculator:

Stepping outside our home bubble, air travel of course went way down, as shown by The Washington Post:

The restaurant industry also took a big hit, which you can easily see in OpenTable’s reservation dataset:

Ugh.

Major brands have also taken a hit. The Washington Post used a mall map metaphor to show which ones are at risk:

Then with so many businesses, such as restaurants, shutting down, unemployment rates reached unprecedented numbers. The New York Times front page made that obvious:

When you can start to see changes in seismic activity, you know something is up:

Here’s to hoping the curves find their way down.

Getting Back to (a Semblance of) Normal

I think most of us are eager to get back to how the day-to-day used to be. Even as an introvert and someone who has worked from home for years, I’m itching to sit down at a restaurant or go to a store without worrying about having to sanitize everything touched by human hands.

We have to use caution though. We have to make sure the data points in the right direction before we start gathering again. For Reuters, Jon McClure called it breaking the wave:

Getting back to “normal” also depends on whether there’s a vaccine. For NYT Opinion, Stuart A. Thompson looked at the hopeful timelines against the typical ones for a vaccine to become available:

It might take longer than you think.

Marcel Salathé and Nicky Case made playable simulations, making an argument for contact tracing and staying at home:

Whether we’re ready to open or not, it looks like in some places things are starting up again.

For Bloomberg, Mira Rojanasakul and Allison McCartney show the change in average distance traveled at the state level, compared against a 6-week period before Covid-19:

For The New York Times, Gabriel J.X. Dance and Lazaro Gamio looked at similar trends through the lens of recent movements that show people leaving home:

This map stresses me out.

I hope these are the right decisions, but I guess we’ll see soon enough.

Stay safe.

-Nathan

Tags: coronavirus, roundup

15 May 03:14

Microsoft will stop offering 32-bit Windows 10 to manufacturers

by Jonathan Lamont
Microsoft logo

Microsoft will stop distributing 32-bit Windows 10 to manufacturers as it prepares to wind down support in favour of the newer 64-bit architecture.

Spotted by Neowin, a recent support document from Microsoft outlines the minimum hardware requirements for different versions of Windows 10.

“Beginning with Windows 10, version 2004, all new Windows 10 systems will be required to use 64-bit builds and Microsoft will no longer release 32-bit builds for OEM distribution,” the document reads.

Version 2004 refers to the upcoming May 2020 update for Windows 10, which should arrive any day now. Microsoft has already pushed it to manufacturers and developers, so it shouldn’t be much longer until the company starts rolling it out to the public.

However, Microsoft’s move away from 32-bit probably won’t result in much for users yet. For now, it means that Microsoft will no longer offer 32-bit Windows 10 to manufacturers for use in new PCs. If you really want 32-bit Windows, you will likely still be able to buy a retail copy for now. Further, if you’re using 32-bit Windows 10 right now, nothing will change for you yet.

Most people are probably using 64-bit Windows already

Microsoft says it will continue to provide feature and security updates for 32-bit Windows 10. While the company didn’t specify how long it would offer those updates, you can probably expect support to continue for the foreseeable future.

All that said, AMD and Intel have offered 64-bit CPUs for more than a decade, so chances are you’re running 64-bit Windows unless you installed the OS on an older PC or purposefully installed 32-bit Windows 10 on a modern system.

If you’re not sure, grab your PC and tap the ‘Windows key’ > type ‘About’ > select ‘About your PC’ > look under ‘System type.’ That will list whether you’re using 64-bit or 32-bit Windows 10.

While you don’t have to make the switch to a 64-bit system right now, it’s something you may have to do in the near future. Moreover, 64-bit comes with a lot of benefits. For one, 64-bit systems can access much more physical memory (i.e. RAM) than 32-bit systems. Most Windows 10 PCs have eight or 16GB of RAM, and some Enterprise versions of 64-bit Windows 10 support up to 6TB of RAM, while 32-bit versions of Windows max out at just 4GB. 64-bit also offers better security and improved performance over 32-bit systems.
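That 4GB ceiling is just pointer arithmetic: a 32-bit pointer can address at most 2^32 bytes. A quick Python sketch, with the caveat that `struct` and `platform` report on the Python build and the machine rather than the exact Windows ‘System type’ setting:

```python
import platform
import struct

# A 32-bit address space covers 2**32 bytes, i.e. 4 GiB.
addressable_bytes = 2 ** 32
print(f"32-bit address space: {addressable_bytes / 2**30:.0f} GiB")  # 4 GiB

# Rough check of your own environment: pointer size of the running
# interpreter (32 or 64 bit) and the hardware architecture.
print("Pointer size:", struct.calcsize("P") * 8, "bit")
print("Machine:", platform.machine())
```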

Ultimately, it’s about time for Microsoft to move on. 32-bit has largely run its course and, at least for the foreseeable future, 64-bit is the way to go.

Source: Microsoft Via: Neowin, PCGamer

The post Microsoft will stop offering 32-bit Windows 10 to manufacturers appeared first on MobileSyrup.

15 May 03:14

Facebook launches 50 person video chat Messenger Rooms

by Brad Bennett

Facebook has rolled out ‘Messenger Rooms’ that allow 50 people to video chat with each other at the same time.

This move comes as video chatting has become a crucial form of communication amid the COVID-19 pandemic.

Messenger Rooms are free and have no time limit. Combine this with the fact that you likely already chat with your friends via Facebook Messenger, and Rooms could be stiff competition for Zoom, Google Meet and other video chat platforms.

Even if you’re not friends with someone on Facebook, you can still invite them to your Messenger Room with a simple link, much like other services.

You can also make Rooms public on Facebook so anyone can join or you can make them private and restricted to your friends.

To get the new feature, you need to download the latest Facebook Messenger update on either iOS, Android, the Microsoft Store or the Mac App Store. You may also begin to see ads for the feature popping up in your Facebook News Feed.

Once you have the update, head over to Facebook Messenger and open the ‘People’ tab along the bottom. Now, at the top, you should see a new button called ‘Create a Room.’

Once you have the room open, you can edit who can join and share the link.

Source: Facebook

The post Facebook launches 50 person video chat Messenger Rooms appeared first on MobileSyrup.

15 May 03:14

Teaching a New Course in the Fall

by Eugene Wallingford

I blogged weekly about the sudden switch to remote instruction starting in late March, but only for three weeks. I stopped mostly because my sense of disorientation had disappeared. Teaching class over Zoom started to feel more normal, and my students and I got back into the usual rhythm. A few struggled in ways that affected their learning and performance, and a smaller few thrived. My experience was mostly okay: some parts of my work suffered as I learned how to use tools effectively, but not having as many external restrictions on my schedule offset the negatives. Grades are in, summer break begins, and at least some things are right with the world.

Fall offers something new for me to learn. My fall compilers course had a lower enrollment than usual and, given the university's current financial situation, I had to cancel it. This worked out fine for the department, though, as one of our adjunct instructors asked to take next year off in order to deal with changes in his professional and personal lives. So there was a professor in need of a course, and a course in need of a professor: Database Systems.

Databases is one of the few non-systems CS courses that I have never taught as a prof or as a grad student. It's an interesting course, mixing theory and design with a lot of practical skills that students and employers prize. In this regard, it's a lot like our OO design and programming course in Java, only with a bit more visible theory. I'm psyched to give it a go. At the very least, I should be able to practice some of those marketable skills and learn some of the newer tools involved.

As with all new preps, this course has me looking for ideas. I'm aware of a few of the standard texts, though I am hoping to find a good open-source text online, or a set of online materials out of which to assemble the readings my students will need for the semester. I'm going to be scouting for all the other materials I need to teach the course as well, including examples, homework assignments, and projects. I tend to write a lot of my own stuff, but I also like to learn from good courses and good examples already out there. Not being a database specialist, I am keen to see what specialists think is important, beyond what we find in traditional textbooks.

Then there is the design of the course itself. Teaching a course I've never taught before means not having an old course design to fall back on. This means more work, of course, but is a big win for a curious mind. Sometimes, it's fun to start from scratch. I have always found instructional design fascinating, much like any kind of design, and building a new course leaves open a lot of doors for me to learn and to practice some new skills.

COVID-19 is a big part of why I am teaching this course, but it is not done with us. We still do not know what fall semester will look like, other than to assume that it won't look like a normal semester. Will we be on campus all semester, online all semester, or a mix of both? If we do hold instruction on campus, as most universities are hoping to do, social distancing requirements will require us to do some things differently, such as meeting students in shifts every other day. This uncertainty suggests that I should design a course that depends less on synchronous, twice-weekly, face-to-face direct instruction and more on ... what?

I have a lot to learn about teaching this way. My university is expanding its professional development offerings this summer and, in addition to diving deep into databases and SQL, I'll be learning some new ways to design a course. It's exciting but also means a bit more teaching prep than usual for my summer.

This is the first entirely new prep I've taught in a while. I think the most recent was the fall of 2009, when I taught Software Engineering for the first and only time. Looking back at the course website reminds me that I created this delightful logo for the course:

course logo for Software Engineering, created using YUML

So, off to work I go. I could sure use your help. Do you know of model database courses that I should know about? What database concepts and skills should CS graduates in 2021 know? What tools should they be able to use? What has changed in the world since I last took database courses that must be reflected in today's database course? Do you know of a good online textbook for the course, or a print book that my students would find useful and be willing to pay for?

If you have any ideas to share, feel free to email me or contact me on Twitter. If not for me, do it for my students!

15 May 03:14

Huawei Can’t Launch New Phones with Google Play Store until May 2021

by Asif Shaik
The US government barred Huawei and ZTE last year from doing any business with US-based firms. This was a massive blow to the companies as they were stopped from launching new phones with GMS (Google Mobile Services) and from supplying network equipment to US network carriers. Huawei had hoped that the ban would be lifted sometime this year, but that won’t happen until at least mid-2021. Continue reading →
15 May 03:14

Your Own Personal Enclave: The Smart Card Reader on the Librem 5

by Kyle Rankin

There are many unique features in the Librem 5 that make it stand out when compared to other smartphones. The easily-accessible hardware kill switches with lockdown mode, removable WiFi and cellular modules, and the fact it uses the same PureOS operating system as our laptops get the most attention. These are great examples of how Purism approaches innovation differently from most tech companies. We favor open standards and build solutions that put the user in control, not us. While that’s often meant we’ve had to avoid proprietary off-the-shelf solutions and do things ourselves, in other cases it’s meant using existing tried-and-true open technologies like OpenPGP smart card readers in a new way–as a secure enclave fully in the user’s control.

We recently got the smart card reader functioning on our Librem 5 Dogwood batch and I realized we haven’t talked much yet about the smart card reader. In this post I will discuss why we decided to add an OpenPGP smart card reader to our phones and how we intend to use it.

What’s Smart About Smart Cards?

Smart cards are ubiquitous technology. You can find them in everything from modern credit cards to USB security tokens like our Librem Key. The idea behind smart cards is to have a discrete and tamper-resistant chip that can store secrets securely and perform a set of cryptographic operations using those secrets without having to expose the secrets outside of the smart card. Data goes in, the smart card uses its secrets to manipulate that data, and modified data goes out.
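That “data goes in, modified data goes out” interface can be sketched as a toy Python class. This is purely illustrative: real smart cards use asymmetric keys inside tamper-resistant hardware, whereas Python’s name mangling offers no real protection, and every name below is made up for the example.

```python
import hashlib
import hmac


class ToySmartCard:
    """Toy model of a smart card interface: the secret never leaves the
    object. Callers can only request operations on data, never read the key.
    (Real cards enforce this in hardware; HMAC stands in for the card's
    asymmetric signing operation.)"""

    def __init__(self, secret: bytes, pin: str):
        self.__secret = secret  # no export method exists
        self.__pin = pin

    def sign(self, data: bytes, pin: str) -> bytes:
        # Data goes in, a signature comes out; the secret stays inside.
        if pin != self.__pin:
            raise PermissionError("wrong PIN")
        return hmac.new(self.__secret, data, hashlib.sha256).digest()


card = ToySmartCard(b"private-key-material", pin="1234")
sig = card.sign(b"message to sign", pin="1234")
print(sig.hex()[:16])
```

The point of the design is the narrow interface: with no read or export operation, compromise of the calling code can at worst request signatures while the card is present, which mirrors the threat model described below.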

In the case of an OpenPGP smart card, the secrets you store are private GPG subkeys like you might use to encrypt or decrypt email or files, or sign code you are contributing to. While you can store GPG keys on your local hard drive (and many people do), storing them on an OpenPGP smart card (such as on a USB security token like the Librem Key) adds a few extra layers of security.

The first layer of security a smart card adds is preventing keys from being copied. Once a private key is copied onto a smart card, it can never be copied back out. If an attacker were to compromise your computer, they could make copies of your GPG private keys in your ~/.gnupg directory on their own computer and attempt to brute force the password. If they were able to guess the password you used to secure those keys, they could use the keys to sign and decrypt things on your behalf on their own computer. Yet if those keys were on a smart card and the smart card were inserted, the most they could do is sign and encrypt/decrypt things on your behalf while they had access to the computer–and only if they could guess your PIN. They couldn’t make copies of the key and use it elsewhere, and the moment you removed your key from the computer they would no longer have access.

The second layer of security a smart card adds is that it performs cryptographic operations with your keys on the smart card itself. Your private keys are never copied into system RAM even temporarily.

Finally, a smart card becomes an authentication factor referred to as “something you have” like a physical key or an identification card (a password is “something you know”). The portable nature of smart cards means that you can keep them with you at all times and since they can only be in one place at a time, they can prove that the user has possession of authentic secrets.

Why Put a Smart Card In a Smart Phone?

While desktop and laptop computers sometimes have smart card readers, and there are a number of specialized handheld devices used for point of sale and other functions that contain smart card readers, to our knowledge the Librem 5 is the only smartphone to have one. We recognized the value of smart cards for storing secrets securely when we developed our Librem Key for use with our laptops. Since a person’s phone arguably stores even more sensitive information than their computer, we felt it was important that it have some way to protect that information with strong secrets on a tamper-resistant chip.

Dogwood PCB CPU side down
Dogwood PCB CPU side down. The smart card reader is on the bottom left side of the main board next to the red wire.

Smart Card or Secure Enclave?

Phones need a way to protect against attackers who might root the device and extract cryptographic secrets from RAM, and many phones today solve that problem with a discrete chip known as a secure enclave. The chip behaves in many ways like a smart card in that it stores secrets and has a set of cryptographic operations it can perform, but these chips are permanently soldered onto the PCB.

Along with any user secrets, secure enclaves also tend to store certificates provided by the vendor. This is because in addition to handling your secrets securely, they also enforce the vendor’s secure boot and software verification system. By storing these certificates in the secure enclave, vendors can make sure that you only run authorized firmware and software, and even that you only attach authorized 3rd party hardware to the device. The reason you see secure enclave chips on modern smartphones has more to do with enforcing vendor control over the hardware and software and less to do with protecting your own secrets.

When we set about to solve the problem of secret storage on the Librem 5, we decided against a permanent secure enclave chip in favor of a smart card reader for a few reasons:

  • While there are exceptions, many secure enclave implementations are closed hardware that run proprietary firmware.
  • We believe users should own their own hardware, so we weren’t interested in technology that risked removing control from the user.
  • Smart card readers are well-understood, ubiquitous and open technology with free software implementations.
  • Smart cards are removable and replaceable by the owner at any time and are relatively inexpensive to replace.

So you can think of a removable smart card as your own personal secure enclave, fully under your control.

Librem 5 Smart Card Use Cases

Having an integrated smart card reader in the Librem 5 opens up all sorts of possibilities. Here are a few of the use cases we are already working toward with the Librem 5.

Secure GPG Key Storage

The obvious first use case for a smart card is to provide the owner a secure place to store their GPG keys. While you can already do this with some other smart cards, it requires you to store keys on a separate USB security token with Near Field Communication (NFC) capabilities. On other smartphones, whenever you want to perform GPG operations you have to take out your security token and hold it up to the phone. While that is a secure workflow, it’s also somewhat inconvenient–especially if you use GPG frequently. Our approach combines the convenience of storing your GPG keys on the local file system with the security of storing them on a smart card.
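As a rough sketch, the standard GnuPG workflow for moving a key onto an OpenPGP smart card looks something like the following (the identity is a placeholder, and you should work from a backup of your keyring, since keytocard moves the key off the local disk):

```shell
# Open the key in GnuPG's interactive editor (placeholder identity).
gpg --edit-key alice@example.com
# At the gpg> prompt:
#   key 1        # select the encryption subkey
#   keytocard    # move it to the appropriate slot on the card
#   save
# Afterwards, verify that the card now holds the key material:
gpg --card-status
```

This is illustrative rather than a Librem 5-specific procedure; consult the GnuPG smart card documentation before moving real keys.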

Simple Disk Unlocking

We intend for the Librem 5 to feature disk encryption for the root disk, and at the moment all of the standard tools are there like you would have on any Linux computer. The one missing bit left to be developed is software we can fit inside the initrd file to allow the owner to enter their disk unlock passphrase on the touchscreen, since unlike a normal Linux computer the Librem 5 doesn’t have a physical keyboard.

On our laptops we already have mechanisms within PureOS that allow you to unlock your disk with GPG keys on a Librem Key. Once configured, you get prompted for your GPG PIN at boot and the smart card decrypts a GPG-encrypted file that contains the LUKS disk unlock secret. After we complete the software to allow passphrase entry on the Librem 5 touchscreen at boot, it will be simple to add the ability to unlock with your GPG keys instead.
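A minimal sketch of what that unlock step can look like with the standard tools (the device path and file location are illustrative, not the actual PureOS configuration):

```shell
# The smart card decrypts the GPG-encrypted file holding the LUKS secret
# (GnuPG prompts for the card PIN), and the plaintext secret is fed to
# cryptsetup on stdin. Paths are placeholders.
gpg --decrypt /etc/luks-secret.gpg | \
  cryptsetup luksOpen /dev/mmcblk0p2 cryptroot --key-file=-
```

In practice this logic runs inside the initrd before the root filesystem is mounted, rather than being typed by hand.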

Secure Authentication

GPG provides three different classes of subkey you can store on a smart card: the first is for encryption, the second for signing and the third for authentication. The authentication subkey is often not created and isn’t in wide use, but it’s something you can use for secure authentication with services such as SSH, and there are already plugins in place to allow this kind of functionality. This would mean you could store SSH authentication secrets safely on your smart card and SSH from your Librem 5 to remote machines without risking that your SSH private key might be copied and reused by an attacker.
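A sketch of the usual GnuPG configuration for this (these are standard gpg-agent settings rather than anything Librem 5-specific):

```shell
# Ask gpg-agent to also act as an SSH agent (config fragment).
echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf

# Point SSH at gpg-agent's socket instead of a regular ssh-agent.
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"

# With an authentication subkey on the card, it now appears as an SSH
# identity; the card PIN is requested when you connect.
ssh-add -L
```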

General-Purpose Secret Protection

We can also take advantage of the smart card to provide a general method to protect other secrets on the device by encrypting them with your GPG keys stored on the smart card. You might use this not only to encrypt local files or entire removable SD cards, but you could also use it to protect a local password vault or other sensitive files.
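As a sketch with the standard GnuPG command line (the key ID and file names are placeholders):

```shell
# Encrypt a local file to the public half of a key whose private half
# lives on the smart card. 0xDEADBEEF is a placeholder key ID.
gpg --encrypt --recipient 0xDEADBEEF passwords.kdbx

# Decryption requires the card to be present and the PIN to be entered.
gpg --decrypt passwords.kdbx.gpg > passwords.kdbx
```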

Recoverable Secrets

Given so much can depend on the secrets stored on a smart card, it’s important that you be able to back up and restore them in case a smart card chip fails or is lost. Because the smart card is removable and uses standard GPG keys, you can use the same standard methods to back up and restore GPG private keys as you would with a laptop and a Librem Key. If a secure enclave chip fails you are out of luck, but because the smart card is not only removable but also relatively inexpensive, you can easily create a backup smart card when you first set up your keys and store it in a safe place along with a thumb drive that contains your full set of private keys.
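The backup itself is the ordinary GnuPG export/import cycle, done before the subkeys are moved to the card (the identity is a placeholder):

```shell
# One-time backup of the full private key, kept somewhere safe offline.
gpg --export-secret-keys --armor alice@example.com > master-backup.asc

# Restoring onto a fresh keyring (for example, before provisioning a
# replacement card) starts from the same file.
gpg --import master-backup.asc
```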

User Personas

The workflow I’m most excited about with smart cards on the Librem 5 is the idea of user personas. You can configure the Librem 5 so that all of the things that make up a person’s unique identity on the Librem 5 are removable: the cellular modem (IMEI), the cellular SIM card (IMSI), the WiFi card (MAC address), the microSD card (personal files and settings), and the smart card (user secrets).

A common problem people have with their smartphone is how to handle their sensitive data when they travel. When you travel you are not only more likely to lose your phone or have it stolen, you also risk a customs official searching through or even copying your data as you cross a border. While some people have come up with elaborate steganography or safe-word approaches to attempt to smuggle data across borders, the most secure approach is one where you travel only with data you can afford to lose, such that you can fully comply with any customs agent requests. This is where user personas come in.

If you think of a user persona like a traditional user on a Linux system you can imagine a scenario where that user’s home directory is stored on a removable microSD card and secured with GPG keys on a smart card. You can then get a different pair of microSD cards and smart cards for each user persona you want to set up.

For instance you might have a “normal” persona that contains your personal contact list, personal social media account secrets, your personal calendar, work files and similar information. All of these files, settings and secrets would be protected by GPG keys on a smart card. To set up a new persona you could then store (or backup) all of that information on a microSD card and wipe that user account clean.

Travel Persona

For travel you can then create a “travel” persona that contains only the files, accounts, contacts, and calendar entries you need for your trip. These files and settings can be protected with a different set of GPG keys you have stored on a separate smart card. Then if your phone gets lost or stolen or if a customs agent searches it, the only files and settings that are at risk are the ones you explicitly need for travel. If you want to go an extra step you could even swap out the cellular modem (for instance if you are a US citizen traveling to Europe you might swap the US version of our cellular modem for the EU version so you have access to more European cellular frequencies). Then when you arrive you can get a local prepaid SIM and be on your way with a completely compartmentalized travel persona. When you return from your trip you can swap back in your personal microSD card and smart card and restore your personal persona.

Conclusion

I hope you are now as excited as I am about the possibilities an integrated smart card reader gives you with the Librem 5. We constantly aim to balance your freedom, security and privacy and develop solutions that provide you not only with strong security, but also full control over your own hardware. We feel the OpenPGP smart card reader in the Librem 5 is another example of a strong but open security method that puts all the keys in your hands.

The post Your Own Personal Enclave: The Smart Card Reader on the Librem 5 appeared first on Purism.

15 May 03:14

The Urban Forest of Vancouverism

by Gordon Price

This is what we planted in the 1990s: a landscape design from the post-Expo era that has come to be known as ‘Vancouverism’.

Downtown South was in a post-rezoning boom, and Hong Kong investment, families and sensibilities were arriving – evident on the 800-block of Hamilton Street where the major tower, completed in 1995, is named ‘Jardine’s Lookout’ (a mountain and residential area on Hong Kong Island).

Now, a generation later, it is surrounded by a maturing urban forest.

The 1991 Downtown South rezoning was accompanied by a neighbourhood-specific streetscape manual in 1994, meant to provide a greener, quieter identity on what would otherwise be traffic-heavy arterials.  Influenced by Erickson and Oberlander’s landscaping of Robson Square, the sidewalks would all have a double row of trees, with increased setbacks and, in this case, a heritage garden (all paid for by the developers, from building to curb).

Note how there are four levels of landscaping, from bushes and hedges at grade, to the rows of trees, to the gardens on decks and roofs.  Foliage surrounds the pedestrian on every side, and above, proving that high-density urban environments can be greener and more lush than any grass-dominant suburb.

Regrettably, the curb-adjacent planting strips (inspired by West-End boulevards) could not handle the foot traffic along the metered streets, and so the grass has been replaced over time with asphalt, brick, concrete and astroturf.  Having been the councillor who pushed for grass curbing in the original urban design, I regret my over-optimism on its survival, but do wish we had gone for something both permeable and able to withstand the wear-and-tear.

This is an urban forest in its adolescence.  And it’s not the only block.  Throughout Downtown South, from Robson to Pacific, Granville to Yaletown, the streets are becoming so lush and thick with foliage that we’ll soon have to consider how we’re going to thin them out.


15 May 03:14

Request for comment: how to collaboratively make trustworthy AI a reality

by Mark Surman

A little over a year ago, I wrote the first of many posts arguing: if we want a healthy internet — and a healthy digital society — we need to make sure AI is trustworthy. AI, and the large pools of data that fuel it, are central to how computing works today. If we want apps, social networks, online stores and digital government to serve us as people — and as citizens — we need to make sure the way we build with AI has things like privacy and fairness built in from the get go.

Since writing that post, a number of us at Mozilla — along with literally hundreds of partners and collaborators — have been exploring the questions: What do we really mean by ‘trustworthy AI’? And, what do we want to do about it?

How do we collaboratively make trustworthy AI a reality?

Today, we’re kicking off a request for comment on  v0.9 of Mozilla’s Trustworthy AI Whitepaper — and on the accompanying theory of change diagram that outlines the things we think need to happen. While I have fallen out of the habit, I have traditionally included a simple diagram in my blog posts to explain the core concept I’m trying to get at. I would like to come back to that old tradition here:

This cartoonish drawing gets to the essence of where we landed in our year of exploration: ‘agency’ and ‘accountability’ are the two things we need to focus on if we want the AI that surrounds us every day to be more trustworthy. Agency is something that we need to proactively build into the digital products and services we use — we need computing norms and tech building blocks that put agency at the forefront of our design process. Accountability is about having effective ways to react if things go wrong — ways for people to demand better from the digital products and services we use every day and for governments to enforce rules when things go wrong. Of course, I encourage you to look at the full (and fancy) version of our theory of change diagram — but the fact that ‘agency’ (proactive) and ‘accountability’ (reactive) are the core, mutually reinforcing parts of our trustworthy AI vision is the key thing to understand.

In parallel to developing our theory of change, Mozilla has also been working closely with partners over the past year to show what we mean by trustworthy AI, especially as it relates to consumer internet technology. A significant portion of our 2019 Internet Health Report was dedicated to AI issues. We ran campaigns to: pressure platforms like YouTube to make sure their content recommendations don’t promote misinformation; and call on Facebook and others to open up APIs to make political ad targeting more transparent. We provided consumers with a critical buying guide for AI-centric smart home gadgets like Amazon Alexa. We invested ~$4M in art projects and awarded fellowships to explore AI’s impact on society. And, as the world faced a near-universal health crisis, we asked questions about how issues like AI, big data and privacy will play out during — and after — the pandemic. As with all of Mozilla’s movement building work, our intention with our trustworthy AI efforts is to bias towards action and working with others.

A request for comments

It’s with this ‘act + collaborate’ bias in mind that we are embarking on a request for comments on v0.9 of the Mozilla Trustworthy AI Whitepaper. The paper talks about how industry, regulators and citizens of the internet can work together to build more agency and accountability into our digital world. It also talks briefly about some of the areas where Mozilla will focus, knowing that Mozilla is only one small actor in the bigger picture of shifting the AI tide.

Our aim is to use the current version of this paper as a foil for improving our thinking and — even more so — for identifying further opportunities to collaborate with others in building more trustworthy AI. This is why we’re using the term ‘request for comment’ (RFC). It is a very intentional hat tip to a long-standing internet tradition of collaborating openly to figure out how things should work. For decades, the RFC process has been used by the internet community to figure out everything from standards for sharing email across different computer networks to best practices for defeating denial-of-service attacks. While this trustworthy AI effort is not primarily about technical standards (although that’s part of it), it felt (poetically) useful to frame this process as an RFC aimed at collaboratively and openly figuring out how to get to a world where AI and big data work quite differently than they do today.

We’re imagining that Mozilla’s trustworthy AI request for comment process includes three main steps, with the first step starting today.

Step 1: partners, friends and critics comment on the white paper

During this first part of the RFC, we’re interested in: feedback on our thinking; further examples to flesh out our points, especially from sources outside Europe and North America; and ideas for concrete collaboration.

The best way to provide input during this part of the process is to put up a blog post or some other document reacting to what we’ve written (and then share it with us). This will give you the space to flesh out your ideas and get them in front of both Mozilla (send us your post!) and a broader audience. If you want something quicker, there is also an online form where you can provide comments. We’ll be holding a number of online briefings and town halls for people who want to learn about and comment on the content in the paper — sign up through the form above to find out more. This phase of the process starts today and will run through September 2020.

Step 2: collaboratively map what’s happening — and what should happen 

Given our focus on action, mapping out real trustworthy AI work that is already happening — and that should happen — is even more critical than honing frameworks in the white paper. At a baseline, this means collecting information about educational programs, technology building blocks, product prototypes, consumer campaigns and emerging government policies that focus on making trustworthy AI a reality.

The idea is that the ‘maps’ we create will be a resource for both Mozilla and the broader field. They will help Mozilla direct its fellows, funding and publicity efforts to valuable projects. And, they will help people from across the field see each other so they can share ideas and collaborate completely independently of our work.

Process-wise, these maps will be developed collaboratively by Mozilla’s Insights Team with involvement of people and organizations from across the field. Using a mix of feedback from the white paper comment process (step 1) and direct research, they will develop a general map of who is working on key elements of trustworthy AI. They will also develop a deeper landscape analysis on the topic of data stewardship and alternative approaches to data governance. This work will take place from now until November 2020.

Step 3: do more things together, and update the paper

The final — and most important — part of the process will be to figure out where Mozilla can do more to support and collaborate with others. We already know that we want to work more with people who are developing new approaches to data stewardship, including trusts, commons and coops. We see efforts like these as foundational building blocks for trustworthy AI. Separately, we also know that we want to find ways to support African entrepreneurs, researchers and activists working to build out a vision of AI for that continent that is independent of the big tech players in the US and China. Through the RFC process, we hope to identify further areas for action and collaboration, both big and small.

Partnerships around data stewardship and AI in Africa are already being developed by teams within Mozilla. A team has also been tasked with identifying smaller collaborations that could grow into something bigger over the coming years. We imagine this will happen slowly through suggestions made and patterns identified during the RFC process. This will then shape our 2021 planning — and will feed back into a (hopefully much richer) v1.0 of the whitepaper. We expect all this to be done by the end of 2020.

Mozilla cannot do this alone. None of us can

As noted above: the task at hand is to collaboratively and openly figure out how to get to a world where AI and big data work quite differently than they do today. Mozilla cannot do this alone. None of us can. But together we are much greater than the sum of our parts. While this RFC process will certainly help us refine Mozilla’s approach and direction, it will hopefully also help others figure out where they want to take their efforts. And, where we can work together. We want our allies and our community not only to weigh in on the white paper, but also to contribute to the collective conversation about how we reshape AI in a way that lets us build — and live in — a healthier digital world.

PS. A huge thank you to all of those who have collaborated with us thus far and who will continue to provide valuable perspectives to our thinking on AI.

The post Request for comment: how to collaboratively make trustworthy AI a reality appeared first on The Mozilla Blog.

15 May 03:14

No slacking allowed: Companies keep careful eye on work-from-home productivity during COVID-19

Dianne Buckner, CBC News, May 14, 2020

"Bosses can use a "dashboard" that shows data about individual workers, including their screen time, the activity of their computer mouse, shots of what's on an employee's screen at any given time, and in some cases, even their physical location via GPS." I'll say it now and bluntly: if you're an employer, this is the stupidest thing you could be doing. Instead of spying on employees, you should be defining what it means to be productive in terms of output, and to evaluate them based on that. Moreoever, if you want any trace of loyalty from your work force, you will treat them like adults, not children. Actually - correction - even children should be treated like this.

15 May 03:13

It’s forever 1999 for online learning critics

Martin Weller, The Ed Techie, May 14, 2020

Martin Weller is disappointed by "the resurrection of so many bad takes on online learning that I had encountered in 1999 and thought we had moved on from." These bad takes started almost immediately following the online pivot, a lot of them from online educators (I noted it at the time and got dumped on by a number of people, including Weller). Anyhow, we both agree with this: "The least interesting thing you can do is replicate the not very effective model of the lecture. We had these discussions back in 1999, and people explored problem based learning, constructivism, collaborative learning, and then later connectivism and flipped learning." You know... online learning should be interesting, dynamic, and fun...

14 May 03:31

The New Yorker Comes Back Home

by peter@rukavina.net (Peter Rukavina)

In September my subscription to The New Yorker expired. I didn’t renew it.

I’d been a subscriber for more than 25 years, and I felt that a lapsed subscription, due to nothing more than a change in credit card number, deserved a call from Calvin Trillin to make sure everything was okay.

No call came. So I let it lapse.

Eight months passed.

Today I opened my mailbox to find the May 4, 2020 issue waiting for me, with a label indicating expiration a year hence.

It was not outside the realm of possibility that I had inadvertently renewed the subscription: I haven’t been 100% myself for some time; who knows what I get up to in my dark midnights.

I called The New Yorker.

I talked to the friendliest person I’ve ever talked to on the phone. Way better than Calvin Trillin.

She looked up my record.

“You’ve received a gift subscription!”, she exclaimed.

“From whom?”, I asked, surprised.

“All it says in the system is ‘a friend’,” she replied.

“Isn’t it nice when that happens: you’ll always be wondering…”, she added.

“Indeed I will,” I chuckled.

We said hearty goodbyes, better-than-Calvin-Trillin clerk and I.

Whoever you may be, friend, þúsund takk (Icelandic for “a thousand thanks”).

14 May 03:31

Deno 1.0

Deno is a new take on server-side JavaScript from a team led by Ryan Dahl, who originally created Node.js. It's built using Rust and crammed with fascinating ideas - like the ability to import code directly from a URL.
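For example, assuming Deno is installed, a script can be run straight from the web (this standard-library URL was the canonical hello-world example around the 1.0 release):

```shell
# Fetches, caches and runs the remote module in one step.
deno run https://deno.land/std/examples/welcome.ts
```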

14 May 02:48

Welcome Adam Seligman, Mozilla’s new Chief Operating Officer

by Mitchell Baker

I’m excited to announce that Adam Seligman has joined Mozilla as our new Chief Operating Officer. Adam will work closely with me to help scale our businesses, growing capabilities, revenue and impact to fulfill Mozilla’s mission in service to internet users around the world.

Our goal at Mozilla is to build a better internet. To provide products and services that people flock to, and that elevate a safer, more humane, less surveillance- and exploitation-based reality. To do this — especially now — we need to engage with our customers and other technologists; ideate, test, iterate and ship products; and develop revenue sources faster than we’ve ever done.

Adam has a proven track record of building businesses and communities in the technology space. With a background in computer science, Adam comes to Mozilla with nearly two decades of experience in our industry. He managed a $1B+ cloud platform at Salesforce, led developer relations at Google and was a key member of the web platform strategy team at Microsoft.

Adding Adam to our team will accelerate our ability to solve big problems of online life, to create product offerings that connect to consumers, and to develop revenue models in ways that align with our mission. Adam is joining Mozilla at a time when people are more reliant than ever on the internet, but also questioning the impact of technology on their lives. They are looking for leadership and solutions from organizations like Mozilla. Adam will help grow Mozilla’s capacity to offer a range of products and services that meet people’s needs for online life.

“The open internet has brought incredible changes to our lives, and what we are witnessing now is a massive acceleration,” said Adam Seligman, Mozilla’s new Chief Operating Officer. “The open internet is keeping our industries running and our children learning. It’s our lifeline to family and friends, and it’s our primary source of information. It powers everything from small business to social movements. I want to give back to an internet that works for people — not against them. And there is no better champion for a people-first vision of the internet than Mozilla.”

In his capacity as Chief Operating Officer, Adam will lead the Pocket, Emerging Technologies, Marketing and Open Innovation teams to accelerate product growth and profitable business models, and work in close coordination with Dave Camp and the Firefox organization to do the same.

I eagerly look forward to working together with Adam to navigate these troubled times and build a vibrant future for Mozilla’s product and mission.

The post Welcome Adam Seligman, Mozilla’s new Chief Operating Officer appeared first on The Mozilla Blog.

14 May 02:07

Librem 5 Dogwood Update

by Purism

Summary: We are almost at the end of the Dogwood board verification and have found and fixed a number of issues with the initial Dogwood boards. We believe we will be able to complete testing and start shipping Dogwood phones out within a few weeks. We have also been working on Evergreen in parallel to procure the remaining components we need for mass production.

We know the community is eager to hear any updates we might have about the Librem 5. Like with our Birch and Chestnut updates, we are trying our best to give you correct information for each batch with a reasonable level of confidence without venturing into speculation or guesses. This is especially important when it comes to reporting hardware updates, as it can take time and iterations to trace a problem down to the component or mistake that caused it, and often first guesses for a root cause prove to be incorrect.

Like everyone else we have been impacted by the COVID-19 pandemic, and back in February we published an overall shipping update based on the best information we had at the time, including dates for Dogwood:

  • End of March: Librem 5 Dogwood internal hardware revision for the Purism team
  • End of April: Librem 5 Dogwood batch for customer pre-orders

We started receiving the internal Dogwood hardware revision a bit later than expected in April and started the work to test and verify the changes. As we mentioned in our Breaking Ground post, the Librem 5 is an entirely new phone built from the ground up and that means significantly more testing and tweaking than with an off-the-shelf design. Unlike with Birch and Chestnut batches, which only had minor tweaks on the previous batch based on problems we uncovered during testing, Dogwood represented a much larger change–a CPU flip to the other side of the PCB to help with heat dissipation–that meant a full redesign of the PCB layout.

Dogwood PCB CPU side up

We knew it would take longer to test Dogwood compared to Birch and Chestnut and budgeted extra weeks for the tests assuming we didn’t come across any major issues. Once testing was complete we would apply any fixes and then send out the remaining Dogwood batch to customers. Like with Birch and Chestnut we wanted to bring you along for the ride and give you updates as we have them, but up until now there honestly hasn’t been all that much to report except that the testing was in progress. This kind of testing of every component in a long checklist is tedious work and is only interrupted with a bit of excitement when something doesn’t work as expected and you must drop everything to figure out why.

Dogwood Hardware Fixes

It’s been about a month since we started that verification and, as expected, we did find some issues with the initial Dogwood boards. Overall the glitches we found have been minor from a design and fix perspective, such as an incorrect MOSFET transistor part and a minor mistake in the CPU power supply, but they resulted in odd and sometimes much larger problems in the functionality of the device, ranging from damaging the USB section of the CPU in one case to disabling an entire I2C bus in another! It’s taken more time than expected to trace through these problems and then apply and test fixes, especially in the case of the CPU, since it required reworks in the PCB that were difficult to make on the existing boards and new CPUs.

Dogwood PCB CPU side down

Another system we have been looking into since Birch is GNSS (aka GPS). GNSS is a pretty delicate thing: the satellite signals are so weak that it’s like trying to spot an 80W light bulb in orbit (that’s about the energy equivalent). What is received on the ground here on earth is usually below the noise level of the antenna, and it takes pretty significant signal processing to get data from it. This means that the GNSS antenna signal path must be pretty close to perfect.

Besides this, with Chestnut we discovered that ST Micro, the manufacturer of the TESEO LIV3 GNSS multi-constellation receiver we are using in the L5, silently changed the recommendations for the antenna input electronics design. Now in Dogwood we saw that GNSS reception still wasn’t great, so we went back to suppliers and the drawing board, applied some more changes and are now finally getting a fix!

Of course, now is the time to apply all of these fixes, before we go into mass production. As you might expect, changes are much more complicated with larger volumes, which is why we chose the public small batch approach to begin with and invited you to come along with us for this ride. At this point we have successfully verified almost all of the components, with only a handful left to go, and we feel we are pretty close to being able to greenlight the remaining Dogwood PCB production within a week, so we can start shipping a few weeks after that. While this has been going on we have also been working in parallel on Evergreen to complete procurement of the components we’ll need for mass production.

What’s Next

Now that we are almost to the end of Dogwood hardware verification, soon we will be able to shift focus from testing individual components on a PCB to evaluating Dogwood and the CPU flip in particular as a whole. We know many people are curious about the impact of the CPU flip on heat dissipation and we intend to publish more detailed analysis soon in a future update.

The post Librem 5 Dogwood Update appeared first on Purism.

14 May 02:07

Lessons From the Three Waves of the 1918 Spanish Flu Pandemic & Vancouver’s Response

by Sandy James Planner
mkalus shared this story from Price Tags.

When the Covid Crisis is over and we are in the After Times there are going to be some major changes in the societal landscape. Changes also happened at the end of the Spanish Flu one hundred years ago, when it was realized that while mortality was highest among the poor, infectious diseases need to be tackled at the community level. This led to public health strategies, disease surveillance, and the concept of universal health care.

I have written about the Spanish Flu of 1918 and how New York City had lower mortality rates than other cities by embarking upon a public health model. They organized public health infrastructure, ensured the distancing of the healthy from the infected, held public health campaigns and undertook disease surveillance.

In New York City and in Vancouver there were three waves of the Spanish Flu. In New York City the flu started in September 1918, with the last wave in February 1919. In Vancouver the flu arrived, probably by train with soldiers of the Siberian Expedition Forces, in October 1918, and the last wave of flu came in March 1919. It was difficult to detect the flu, as the virus could not be isolated and the bacillus causing infection did not show up in cultures. It was also not the flu itself that killed victims; it was secondary pneumonia resulting from the infection.

In New York and in Vancouver it was the young who were most impacted by the flu, and being employed increased your likelihood of exposure and death.

While there was an overall Medical Health Officer for British Columbia there was also one for the City of Vancouver. In 1918 these two physicians were at odds.

The Medical Health Officer of Vancouver was Dr. Frederick Underhill, who was loath to close industry or business and did not want to close schools. Dr. Underhill thought that children were more at risk roaming the streets than being in school. As sickness spread in October 1918, parents started to pull their children out of school, and Dr. Underhill was forced to close the schools after ten percent of the students did not show up. Unfortunately, during the second and third waves of the flu in January and March 1919, schools, industries and offices remained open, and Vancouver had about 900 deaths in a population of 100,000, a mortality rate four times that of most cities.

Other towns in British Columbia which had closed down because of the Spanish Flu wanted Vancouver to close its industries and offices too. But Vancouver politicians debated the best approach, even questioning whether isolating the sick and wearing face masks were needed at all. The Provincial Medical Health Officer was direct in telling Vancouver citizens to avoid crowds and not to use public towels or public drinking cups. There was also some questionable advice from the Province to keep your feet dry and avoid low necklines.

The sad reality is that a complete closure of Vancouver in 1918 would have stopped the transmission of disease faster.

A thesis written by Sarah Buchanan in 2007 on the Spanish Flu epidemic of 1918 in British Columbia also reminds us that the current pandemic is not likely to be the only one. Thirteen years ago she quoted a study published in Science that documented the mutation of an A/H5N1 avian influenza virus

“from a non-airborne transmission virus to one that is capable of aerosol or respiratory droplet transmission in ferrets. As a result of this research, the authors have concluded that these highly pathogenic avian influenza A/H5N1 viruses ‘have the potential to evolve directly to transmit by aerosol or respiratory droplets between mammals, without reassortment in any intermediate host, and thus pose a risk of becoming pandemic in humans’.”


Sarah Buchanan concludes:

“pandemic influenza in the future is not unlikely and understanding the patterns associated with influenza pandemics of the past can provide insight into the management and/or prevention of future pandemics.”

Images: Vancouver Sun, Orca


14 May 02:07

Climate change displayed, with shower tiles

by Nathan Yau

Based on a chart by Ed Hawkins, Gretchen Goldman and Tom Di Liberto transformed their shower wall into a canvas showing global warming.

Each row represents a country, and each cell — I mean tile — represents the temperature difference compared to the overall average for the time period.
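The encoding behind these stripe and tile charts is simple enough to sketch. Here is a hypothetical, minimal Python version, not taken from Hawkins' actual implementation (the palette, temperatures, and binning are invented for illustration): compute each value's anomaly against the period mean and bin it into a blue-to-red palette.

```python
def anomaly_colors(temps, palette=("darkblue", "blue", "lightblue",
                                   "lightcoral", "red", "darkred")):
    """Map each temperature to a palette color by its anomaly vs. the mean."""
    mean = sum(temps) / len(temps)
    # Scale symmetrically around the mean so blues and reds are comparable.
    span = max(abs(min(temps) - mean), abs(max(temps) - mean)) or 1.0
    colors = []
    for t in temps:
        # Normalize the anomaly from [-span, span] to a palette index.
        frac = (t - mean + span) / (2 * span)
        idx = min(int(frac * len(palette)), len(palette) - 1)
        colors.append(palette[idx])
    return colors

# A warming trend maps early values to blues and late values to reds:
print(anomaly_colors([13.8, 13.9, 14.0, 14.3, 14.6, 14.9]))
# → ['darkblue', 'blue', 'blue', 'lightcoral', 'red', 'darkred']
```

Each resulting color becomes one stripe (or one shower tile) in chronological order.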

Tags: climate, global warming, physical, shower

14 May 02:06

Global warming color stripes, as decorative conversation starter

by Nathan Yau

Ed Hawkins, who you might recognize from charts such as spiraling global temperature and the aforementioned temperature grid, encourages you to show your stripes. Select your region, and see how average temperature increased. I saw this last year, but I just realized that people are using this chart to print, knit, and decorate.

Emmalie Dropkin made a blanket:

Pueblo Vida Brewing and the University of Arizona Climate Systems Center used a variation of the stripes to decorate beer cans:

Gwyneth Matthews made earrings:

Hawkins also has a Zazzle store of his own, and of course there’s a face mask now:

Amazing what a few stripes can do. Check out the temperatures for your own region.

Tags: climate, Ed Hawkins, global warming, stripes

14 May 02:06

Twitter Favorites: [waxy] Chrome adds tab groups: for the tab hoarders out there https://t.co/fhnxjN2k2Z

Waxy.org @waxy
Chrome adds tab groups: for the tab hoarders out there blog.google/products/chrom…
14 May 02:06

Twitter Favorites: [hey_trev] Where are Toronto's city-owned golf courses? Which ones should be prioritized for park conversion? Well I'm glad yo… https://t.co/vR9lQauQZv

Trevor Heywood @hey_trev
Where are Toronto's city-owned golf courses? Which ones should be prioritized for park conversion? Well I'm glad yo… twitter.com/i/web/status/1…
14 May 01:27

✚ How to Make Animated Histograms in R, with ggplot and gganimate

by Maarten Lambrechts

Make them move to show a shift in distributions over time. Read More
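The linked tutorial uses R with ggplot and gganimate; as a rough sketch of the underlying idea in plain Python (the data, bin edges, and drift here are all invented for illustration), you can precompute one set of histogram counts per time step and hand those frames to whatever animation layer you use:

```python
import random

random.seed(42)

def histogram(values, edges):
    """Count values into half-open bins [edges[i], edges[i+1])."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(counts)):
            if edges[i] <= v < edges[i + 1]:
                counts[i] += 1
                break
    return counts

edges = [0, 2, 4, 6, 8, 10]
frames = []
for step in range(5):
    # Each frame's sample drifts upward, so the animation shows the shift.
    sample = [random.gauss(2 + step * 1.5, 1.0) for _ in range(500)]
    frames.append(histogram(sample, edges))
```

Feeding `frames` to an animation layer (for example, matplotlib's `FuncAnimation` redrawing the bars frame by frame) gives the moving-histogram effect the tutorial builds with gganimate.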

14 May 01:26

Indigenous Authorship on Open and Digital Platforms: Social Justice Processes and Potential

Johanna Funk, Kathy Guthadjaka, Journal of Interactive Media in Education, May 13, 2020

Open learning platforms contradict indigenous ways of knowing, according to the authors. The platforms depict indigenous knowledge as property, and they "shift authority over knowledge away from Indigenist intellectual sovereign processes" which tie knowledge (and sometimes access to knowledge) "to the identity and authority level of the intended audience." Hence, from a mostly Australian context, this paper "explores the capacity of open digital platforms to promote social justice according to how they host, incorporate, structure and disseminate Indigenous knowledges and languages." The paper calls on educational institutions to "cede digital territory possessiveness and preference for technology-centred production models" and "let go of fixation over outcome-focused and expensive technology that undermines presence of knowledge authority and ontologies and excludes use by people on Country and in remote communities." Fair enough, but I think this eventually raises tensions within indigenous communities between individual autonomy and community governance.

Web: [Direct Link] [This Post]