Shared posts

04 Jul 13:52

OpenBSD on the Microsoft Surface Go 2 (notaweblog)

by joshua stein

I used OpenBSD on the original Surface Go back in 2018 and many things worked with the big exception of the internal Atheros WiFi. This meant I had to keep it tethered to a USB-C dock for Ethernet or use a small USB-A WiFi dongle plugged into a less-than-small USB-A-to-USB-C adapter.

Microsoft has switched to Intel WiFi chips on their recent Surface devices, making the Surface Go 2 slightly more compatible with OpenBSD.

Hardware

As with the original Surface Go, I opted for the WiFi-only model with 8 GB of RAM and a 128 GB NVMe SSD. The processor is now an 8th-generation dual-core Intel Core m3-8100Y.

The tablet still measures 9.65" across, 6.9" tall, and 0.3" thick, but the screen bezel has been reduced a half inch to enlarge the screen to 10.5" diagonal with a resolution of 1920x1280, up from 1800x1200.

The removable USB-attached Surface Go Type Cover contains the backlit keyboard and touchpad. I opted for the "ice blue" alcantara color, as the darker "cobalt blue" version I purchased last time is no longer available. The Type Cover attaches magnetically along the bottom edge and can be folded up against the front of the screen magnetically, issuing an ACPI sleep signal, or against the back, automatically disabling the keyboard and touchpad in expectation of being touched while holding the Surface Go.

One unfortunate side-effect of the smaller screen bezel is that the top of the Type Cover keyboard now rests very close to the bottom of the screen when the keyboard is in its raised position. With one's fingers on the keyboard, the text at the very bottom of the screen such as the status bar in a web browser can be covered up by the left hand.

The Type Cover's keyboard is quiet but has a very satisfying tactile bounce. The keys are small considering the entire keyboard is only 9.75" across, but it works for my small hands. The touchpad has a slightly less hollow-sounding click than I remember on the previous model which is a plus.

The touchscreen is an Elantech model connected via HID-over-I2C and supports pen input. The Surface Pen is available separately and requires an AAAA battery, though it works without any pairing necessary, with the exception of the top eraser button, which requires Bluetooth for some reason. The pen attaches magnetically to the left side of the screen when not in use.

A set of stereo speakers face forward in the top left and right of the screen bezel but they lack bass. Along the top of the unit are a power button and physical volume rocker buttons. Along the right side are a 3.5mm headphone jack, USB-C port, Surface dock/power port, and microSD card slot located behind the kickstand.

Wireless connectivity is now provided by an Intel Wi-Fi 6 AX200 802.11ax chip which also provides Bluetooth connectivity.

Firmware

The Surface Go's BIOS/firmware menu can be entered by holding down the Volume Up button, then pressing and releasing the Power button, and releasing Volume Up when the menu appears. Secure Boot as well as various hardware components can be disabled in this menu. Boot order can also be adjusted.

When powered on, holding down the power button will eventually force a power-off of the device like other PCs, but the button must be held down for what seems like forever.

Installing OpenBSD

Due to my previous OpenBSD work on the original Surface Go, most components work as expected during installation and first boot.

To boot the OpenBSD installer, dd the install67.fs image to a USB disk, enter the BIOS as noted above and disable Secure Boot, then set the USB device as the highest boot priority.
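
As a sketch of that step in shell (the USB stick's device name is an assumption; check dmesg after plugging it in and adjust sd1 accordingly):

```shell
# Write the OpenBSD 6.7 installer image to a USB disk, run as root.
# /dev/rsd1c is assumed here -- verify the device in dmesg first,
# since dd will destroy whatever is on the target disk.
dd if=install67.fs of=/dev/rsd1c bs=1m
```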

When partitioning the 128 GB SSD, one can safely delete the Windows Recovery partition, which takes up 1 GB: it can't repair a totally deleted Windows partition anyway, and a full recovery image can be downloaded from Microsoft's website and copied to a USB disk.

After installing OpenBSD but before rebooting, mount the EFI partition (sd0i) and delete the /EFI/Microsoft directory. Otherwise the firmware may try to boot the Windows Recovery loader; with that directory gone, OpenBSD's EFI bootloader at /EFI/Boot/BOOTX64.EFI loads by default.
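
Those steps, sketched as shell commands run as root from the installer before rebooting (the sd0i partition letter matches this machine's disk layout and may differ elsewhere):

```shell
# Mount the EFI system partition and remove Microsoft's boot files so
# the firmware falls through to OpenBSD's default loader.
mount /dev/sd0i /mnt
rm -rf /mnt/EFI/Microsoft
umount /mnt
```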

One annoyance noted in my previous review remains: if the touchpad is touched or the F1-F7 keys are pressed, the Type Cover will detach all of its USB devices and then reattach them. I'm not sure whether this is by design or some Type Cover firmware problem, but once OpenBSD is booted into X, it will open the keyboard and touchpad USB data pipes and they work as expected.

OpenBSD Support Log

2020-05-12: Upon unboxing, I booted directly into the firmware screen to disable Secure Boot and then installed OpenBSD. Upon first full boot of OpenBSD, the kernel panicked due to acpivout not being able to initialize its index of BCL elements. I disabled acpivout for the moment.

The Intel AX200 chip is detected by iwx, the firmware loads (installed via fw_update), and the device can do an active network scan, but when trying to authenticate to a network, the device firmware reports a fatal error.
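
For reference, a hedged sketch of the commands involved (the network name and key are placeholders; fw_update fetches the iwx firmware automatically once networking is available by some other means):

```shell
# Install the iwx firmware, scan, then attempt to join a network.
# At this point, the join step is where the firmware reported a fatal error.
fw_update
ifconfig iwx0 scan
ifconfig iwx0 join mynetwork wpakey mypassword
dhclient iwx0
```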

2020-05-13: I created a new umstc driver that attaches to the uhidev device of the Type Cover and responds to the volume and brightness keys without having to use usbhidaction. More importantly, it holds the data pipe open so the Type Cover doesn't reboot when the F1-F7 keys are pressed at the console. A solution will still be needed to prevent the Type Cover from rebooting when the touchpad is accidentally touched when not in X.

I tried S3 suspend and resume, but when pressing the power button to wake it up, it immediately displays the initial firmware logo and boots as if it were restarted. S4 hibernation works fine. S3 suspend works in Ubuntu, so I'll need to do some debugging to figure out where OpenBSD is failing.

2020-05-31: I imported my umstc driver into OpenBSD.

2020-06-02: I imported my acpihid driver into OpenBSD.

2020-07-05: Stefan Sperling (stsp@) has done a lot of work on iwx over the past couple months, so WiFi is much more stable and performant now.

Current OpenBSD Support Summary

Status is relative to my OpenBSD-current tree as of 2020-06-02.

Component Works? Notes
AC adapter Yes Supported via acpiac and status available via apm and hw.sensors, also supports charging via USB-C.
Ambient light sensor No Connected behind a PCI Intel Sensor Hub device which requires a new driver.
Audio Yes HDA audio with a Realtek ALC298 codec, supported by azalia.
Battery status Yes Supported via acpibat and status available via apm and hw.sensors.
Bluetooth No Intel device, shows up as a ugen device but OpenBSD does not support Bluetooth. Can be disabled in the BIOS.
Cameras No There are apparently front, rear, and IR cameras, none of which are supported. Can be disabled in the BIOS.
Gyroscope No Connected behind a PCI Intel Sensor Hub device which requires a new driver which could feed our sensor framework and then tell xrandr to rotate the screen.
Hibernation Yes Works via the ZZZ command.
MicroSD slot Yes Realtek RTS522A, supported by rtsx.
SSD Yes SK hynix NVMe device accessible via nvme.
Surface Pen Yes Works on the touchscreen via ims. The button on the top of the pen requires Bluetooth support so it is not supported.
Suspend/resume No TODO
Touchscreen Yes HID-over-I2C, supported by ims.
Type Cover Keyboard Yes USB, supported by ukbd and my umstc. 3 levels of backlight control are adjustable by the keyboard itself with F1.
Type Cover Touchpad Yes USB, supported by the umt driver, which provides 5-finger multitouch, two-finger scrolling, and hysteresis, and allows disabling tap-to-click, which is otherwise enabled by default in normal mouse mode.
USB Yes The USB-C port works fine for data and charging.
Video Yes inteldrm has Kaby Lake support adding accelerated video, DPMS, gamma control, integrated backlight control, and proper S3 resume.
Volume buttons Yes Supported by my acpihid driver.
Wireless Yes Intel AX200 802.11ax wireless chip supported by iwx.
16 May 19:55

Meta Blog

My recent Amazon-exit piece got an order of magnitude more traffic than even the most popular outings here normally do. Which turned my mind to thoughts of blogging in 2020, the why and how of the thing. Here they are, along with hit counts and referer data from last week. Probably skip this unless you’re interested in social-media dynamics and/or publishing technology.

Numbers

In the first week after publication, Bye, Amazon was read somewhat more than 614,669 times by a human (notes on the uncertainty below).

The referers with more than 1% of that total were Twitter (18.13%), Hacker News (17.09%), Facebook (10.55%), Google (4.29%), Vice (3.40%), Reddit (1.66%), and CNBC (1.09%). I think Vice was helped by getting there first. I genuinely honestly have no idea how the piece got mainlined so fast into the media’s arteries.

For comparison, my first Twitter post is up to 1.015M impressions as of now and, last time I checked, Emily’s was quite a ways ahead.

It’s hard for me to know exactly how many people actually read any one ongoing piece because I publish a full-content RSS/Atom feed. Last time I checked, I estimated 20K or so subscribers, but nobody knows how many actually read any given piece. If I cared, I’d put in some kind of a tracker. That 614K number above comes from a script that reads the log and counts the number of fetches executed by a little JavaScript fragment included in each page. Not a perfect measure of human-with-a-browser visits but not terrible.
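
The counting approach can be sketched in a few lines of shell; the log format and the fragment's filename here are invented for illustration:

```shell
# Approximate human-with-a-browser visits by counting fetches of the one
# JavaScript fragment that every page includes (path and log are made up).
cat > /tmp/sample-access.log <<'EOF'
1.2.3.4 - - [12/May/2020:10:00:01] "GET /ongoing/counter.js HTTP/1.1" 200 120
1.2.3.4 - - [12/May/2020:10:00:02] "GET /ongoing/When/202x/x.html HTTP/1.1" 200 9000
5.6.7.8 - - [12/May/2020:10:00:03] "GET /ongoing/counter.js HTTP/1.1" 200 120
EOF
grep -cF 'GET /ongoing/counter.js' /tmp/sample-access.log
```

A client that fetches the fragment has loaded page subresources, which filters out most crawlers — hence "not a perfect measure" but "not terrible."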

But aren’t blogs dead?

Um, nope. For every discipline-with-depth that I care about (software/Internet, politics, energy economics, physics), if you want to find out what’s happening and you want to find out from first-person practitioners, you end up reading a blog.

They’re pretty hard to monetize, which means that the people who write them usually aren’t primarily bloggers, they’re primarily professional economists or physicists or oil analysts or Internet geeks. Since most of us don’t even try to monetize ’em, they’re pretty ad-free and thus a snappy reading experience.

Dense information from real experts, delivered fast. Why would you want any other kind?

Static FTW

ongoing ran slow but anyone who was willing to wait fifteen seconds or so got to read that blog. One reason is that the site is “static” which is to say all the payload is in a bunch of ordinary files in ordinary directories on a Linux box, so the web server just has to read ’em out of the disk (where by “disk” I mean in-memory filesystem cache when things are running hot) and push ’em out over the wire. (The bits and pieces are described and linked to from the Colophon.)

It turns out that at the very hottest moments, the Linux box never got much above 10% CPU, but the 100M virt NIC was saturated.

I’ve never regretted writing my own blogging system from the ground up. I’m pretty sure I’ve learned things about the patterns of traffic and attention on the Internet that I couldn’t have learned any other way.

If I were going to rewrite this, since everything’s static, I’d just run it out of an S3 bucket and move the publishing script into a Lambda function. It’d be absurdly cheap and it’d laugh at blog storms like last week’s.

It’s not a top priority.

16 May 19:55

Three Approaches To Increasing Engagement In A Community

by Richard Millington

If you want to increase engagement, you have three channels open to you.

1) Optimization
This is by far the most common (and least effective) approach.

In this approach, you improve what you’re already doing. You might improve and upgrade the design of your community, develop a better journey for newcomers, or initiate more exciting and engaging discussions. You might work with top members to deliver better, quicker responses to community questions.

This is still definitely worth doing, but there’s a law of diminishing returns at play here. You can spend a huge amount of time and resources to see paltry returns. Once you have gotten your community to a high standard, there are better channels to increase engagement.

2) Promotion
This is where you try to get more people to join the community (it’s worth optimizing before growing). You need to ensure the community is featured prominently on your main site, integrate it deeply within your products/services (it should only ever be a click away), and craft newsletters or emails to your audience driving them to the community.

You can also try paid social ads, influencers, and advertising in other channels to drive more people to the community. Undertaking an SEO audit and optimizing accordingly is also a wise idea in many (if not most) types of communities.

Again there is a law of diminishing returns here. Once you’ve reached most of the people likely to join, you’re either reaching the same people again or reaching people less likely to join.

3) Expansion
The third approach is the most strategic and most successful. If you want more engagement, expand the focus of your community so there are more things members can engage in. In most communities, people only visit when they have a question. It doesn’t matter how nicely you ask them to participate; they won’t if they don’t have more questions to ask.

This is why you need to expand the topics your members can ask about. For example, a community focused on a software product for project managers might also create channels where project managers can ask and talk about broad project-management related topics. You might target specific audiences (newcomers, veterans, people working in specific fields) and create unique groups and areas for them.

Your goal here is to expand the value of the community to satisfy more people in your audience. Sure, they might not always be talking about the intricacies of your products and services, but they’re constantly talking in a place you control and being influenced by your biggest supporters.

This approach is research-heavy. You use a survey to develop your target audiences, identify each of their needs, and gradually expand the concept of the community to cater to each of them.

The catch is you can’t expand too fast. If you don’t have enough activity to sustain a critical mass at the topics you’re covering today, then expanding won’t help. But if you’re managing a larger and more mature community, expansion is usually the way to go.

These steps are generally sequential too. First, you improve what your community is doing today. Second, you get more people to visit the community. Third, you expand the focus of your community to satisfy more needs of more audiences.

16 May 19:54

We’re Hiring: Social Media Person

by David McCandless

We now have over one million people following us across our social channels! Thank you!

So we need a tech-savvy social ninja to help us connect & stay in touch with this audience.

Like what we do? Got good social sense / experience? Love connecting with communities online, especially via beautiful infography? Get in touch!

Freelance, part-time, remote, flexitime. Closing date: 1st Jun 2020.

» Apply here

16 May 19:54

Lost in time and space

by Chris Grey

This week saw the third round (by videoconference) of the increasingly surreal post-Brexit negotiations, which rumble on as the one supposedly immutable thing in a world otherwise transformed by the coronavirus crisis. Unsurprisingly, the indications so far (there is a final plenary session today and so something may change, but I doubt it) are that the impasse continues.

David Frost, the UK’s Chief Negotiator, has reportedly briefed the cabinet to the effect that the EU is still refusing to grant the UK the same deals that it has done with other independent countries. In other words, the flawed logic of ‘sovereign equals’ discussed in my recent post continues to guide the UK’s approach. As noted there, it is an approach that almost guarantees that the negotiations will fail. Next month, when the transition period extension decision deadline looms, is likely to be crucial as it also marks the point when, in the past, the government has said that it might simply end negotiations if there is no deal in prospect.

As always it’s worth recalling, as the 2016 Referendum retreats into history, that the current situation grows organically out of the fact that the Brexiters never had an agreed plan for how Brexit should be done, despite the fact that they had spent years scheming and dreaming for it to happen. Since then, at every stage, the hardest of Brexiters have driven the meaning of Brexit in an ever-harder direction. Thus we have arrived at the present point when virtually any kind of deal is ruled out by red lines that have now turned the deepest crimson.

The plan was that no plan was needed

Yet this does not mean, as unworldly cynics often claim, that no deal was ‘the plan all along’. It’s far worse than that: the central truth about Brexit is that there was no plan all along. It’s true, no doubt, that there are some Brexiters who have always wanted a no deal outcome. But there are plenty of others who genuinely harbour the fantasy that it should be quick and easy to do a deal, and that Britain ‘holds all the cards’. They didn’t for the most part cynically deceive the voters, they naively deceived themselves. And, moreover, the current UK negotiators are almost certainly fully and genuinely in sway to the theology of the Brexit Ultras that, if met with sufficiently firm resolve, the EU will break its own red lines.

This is not inconsistent with having no plan for Brexit. On the contrary, it is one of the reasons for having no plan, since the assumption is that all that is needed is sufficient determination and, no doubt, ‘bulldog spirit’ for all obstacles to disappear. Such reasoning ought to have been discredited by May’s government’s initial attempt – led by David Davis – to negotiate in just that spirit.

But in the circular logic of Brexiters, the failure of their claims invariably proves that they were right all along. For example, this week Douglas Carswell argued that the EU “constantly failed to make the concessions they ought to have made” (my emphasis) and so “administered to Theresa May the equivalent of a punishment beating”. On this analysis, all would have been well had May not – with the Chequers Proposal that saw the resignations of Davis and Johnson – blinked in the face of EU pressure (rather than belatedly, and partially, recognized the economic damage of the original approach).

One serious consequence of this is that the hard core Brexiters have never accepted the Withdrawal Agreement (WA) she negotiated, large chunks of which survived in that which Boris Johnson signed, as fully legitimate. Rather, it is seen as testimony to May’s lack of steel and as the legacy of ‘the remainer parliament’. In due course, if there is no trade deal, this will reappear in the form of questioning what was agreed about the financial settlement, for the Ultras never accepted that this should have been signed off in advance of a trade deal being struck. The aborted ‘row of the summer’ (of 2017) over sequencing still rankles.

In the meantime, it is evident in the way that, having shifted commitment on the Level Playing Field (LPF) from the WA to the non-binding Political Declaration, Johnson’s government are treating even that as some new imposition from Brussels (in fact, it was spelled out at least as early as March 2018). Hence this is now routinely referred to as the “so called” Level Playing Field, as for example in Michael Gove’s recent statement. Brexiter commentators, meanwhile, use more colourful, and yet strangely self-pitying, language, with Ambrose Evans-Pritchard recently suggesting that the EU is “trying to impose terms appropriate for a country defeated in war” (£).

Brexit: unmoored from history and geography

Such hyperbole is the counterpart of the ‘sovereign equals’ idea, but a particular twist on that, which is currently the government’s constant refrain, is that the EU is refusing to understand that the UK has now left and is not still a member state negotiating exit terms (£). This is deeply disingenuous. For whilst, of course, it is literally and legally true, it implies that the UK has suddenly appeared out of nowhere as if with no history and is seeking to create a relationship with the EU from scratch.

Yet, clearly, the form of the future relationship cannot be independent of the near 50-year period that preceded it, or unaffected by the very high level of economic and political interconnection that exists as a result. Moreover, to the extent that the UK is still interconnected in this way because of the transition period, it is nonsense to treat the legal truth of having departed the EU as negating the empirical truth of those interconnections. It’s self-evidently the case that the UK isn’t ‘just another’ third country seeking a relationship with the EU: it is in a half-way house between having been a member and creating a third country relationship.

That can also be put in two different ways. First, it means something which Brexiters have never understood, namely that Brexit is a process and not an event. That failure is precisely encapsulated in the idea that, as of 31 January 2020, the UK’s departure was cut and dried, when in fact the terms of that are still a matter of ongoing implementation (as regards the WA) and ongoing negotiation (as regards the future terms). Second, it puts paid to the entire gambit of seeking ‘precedents’ from other EU relationships with third countries that is associated with the ‘sovereign equals’ logic. For there is no precedent for Brexit – it is a unique process, and therefore the form of the future relationship will derive from those unreplicable conditions.

So, context matters. History matters. And of course geography matters. This week a tweet from Robbie Gibb – formerly Director of Communications at 10 Downing Street and an enthusiast for Brexit – opined that “in the modern world of international commerce and trade, geography is irrelevant. It’s just the EU excuse to keep us in its orbit”. It was widely mocked, but Gibb was only saying what is an article of faith to Brexiters. Not only is the UK envisaged as having a completely ahistorical relationship with the EU, but also one in which the spatial reality of proximity is irrelevant, except as an EU ploy.

This lies at the heart of what public policy academic Gabriel Siles-Brugge of Warwick University has aptly called “the emotive political economy” of Brexit trade policy. By contrast, the ‘gravity model’ of trade is just about the most robust empirical observation in economics, and even for services traded electronically there is no real evidence for what Liam Fox called “a post-geography trading world”. As I’ve argued before, Brexiters in both their ‘nationalist’ and ‘globalist’ variants fail to understand that the name of the modern game is regionalisation. Inescapably, the UK’s region is the continent of Europe, most of which is in the EU.

A consistent approach

The key to unlocking the Brexiters’ misunderstanding of this is to see that it is entirely consistent with their overall approach to the current negotiations. In a revealing tweet in 2018 Dominic Raab stated that “Remainers believe UK prosperity depends on its location, Brexiters believe UK prosperity depends on its character”. So just as no plan is needed if the UK shows sufficient ‘determination’, so the brute fact of geographical location will wilt in the face of ‘character’.

It’s no coincidence that Boris Johnson’s endless gung-ho rhetoric appeals to them: it’s not that he has the salesman’s trick of selling the product, it’s that the rhetorical trickery and the product are one and the same. That’s why, from a Brexiter point of view, constant appeals to people to ‘believe’ in and ‘get behind’ Brexit make perfect sense. With such belief, commitment and character all the inconvenient facts of history, geography (and economics) become irrelevant.

Unhampered by this peculiar metaphysics, it is neither capricious nor manipulative of the EU to seek a relationship with the UK which is modulated by the history and geography of the two parties. On the contrary, it is a remarkable folly on the part of the UK to imagine that these can or should be ignored, or that a pivot to ‘the Anglosphere’ or, even more absurdly, the Commonwealth, represents a viable strategy or a realistic calculation of self-interest. That is a critique of Brexit, of course, but given that Brexit is happening there is no reason to compound that other than the implacable hatred that Brexiters hold for the EU and for any proposal emanating from it. Even if one harboured such hatred, it is strange to carry it to the extent of inflicting national self-damage even after having achieved Brexit.

However, with the Ultra Brexiters now firmly in charge of the government, it seems that every aspect of the negotiations, including the constant attempts to resile from the terms of the Northern Ireland Protocol (although this week has seen some signs of realism about this), is to be conducted in this spirit of half-paranoid, half-bellicose antagonism. Since this is of course daily eroding what little trust the EU has in the UK as a reliable and responsible negotiating partner, the growing danger is not just of there being no future terms deal but of there being long-term diplomatic and economic conflict. This isn’t remotely what voters were told Brexit would mean but, then, nothing that they were told by the Vote Leave campaign was true.

Canadian conundrums

So, for now, the negotiations limp on. It’s no longer entirely clear whether Brexiters know what it is that they are aiming for. It used to be an article of faith that the aim was a “best in class” Free Trade Agreement (FTA) and a “fantastic new partnership” - what else, after all, would be good enough for Brexit Britain? In the past that meant, at least, a ‘Canada-style’ (aka CETA) deal. For example, as recently as February, Dominic Raab explicitly elided the two in talking of a “Canada-style, best in class, free trade agreement”. Mysteriously, though, such an agreement is now being described – Ambrose Evans-Pritchard again (£) - as a “skinny FTA” which would be “no tariffs or quotas but nothing special” or, more commonly, as a “bare bones” deal.

It’s difficult to disentangle all the shifting meanings here (there are shades of the way that ‘hard Brexit’ used to mean ‘Canada-style’ but morphed into meaning ‘no deal’). CETA is not, in fact, an entirely zero tariffs deal, and at the House of Lords EU Committee last week Michael Gove talked of downgrading UK demands from zero tariffs so as to “end up like Canada”. So ‘Canada-style’ has been used at different times to denote both a ‘best in class’ and a ‘skinny’ or ‘bare bones’ deal.

The idea seems to be that by re-badging ‘Canada-style’ in this way it discredits the EU’s LPF demands so as to make them seem unreasonable. Being thwarted when ‘all we wanted was a skinny FTA’ sounds like a better reason to be aggrieved than being refused a ‘best in class’ deal, even though it appears to reference the same, or a very similar, thing. And, after all, if it is deemed that with sufficient character the UK can slip the anchors of time and space it is surely a small matter to re-define Canada in whatever image is chosen.

But, however labelled, the immutable facts of history and geography persist, and these inform the EU’s position. So even a ‘skinny’ or ‘bare bones’ deal would be a complex matter, and come with LPF requirements that the UK currently rejects, partly because, as a matter of fact, CETA itself does have some LPF conditions and partly because, as the EU have long made clear, such a deal for the UK would entail more stringent ones. If these irreconcilable positions persist, we will be back to that other linguistic sleight of hand, an ‘Australia-style’ trade deal – more accurately known as no deal.

A test of character

Again, it would be a mistake to think that all this is simply a well-worked out plan to dump the blame on the EU if no deal is done – although that is certainly what will happen in that event. Rather, many Brexiters genuinely believe that the EU will ‘blink’ at the end of the year, making a transition period extension irrelevant because it delays getting to the ‘blink point’. And, again, that makes sense within a world view that regards history and geography and economics as so much flim-flam, and the real issue to be displaying determination and character.

Yet this is of little comfort to the rest of us, since we have a government whose most obvious characteristics are being pumped high on the steroids of Brexiter hubris and being staffed by perhaps the most woefully incompetent politicians in modern British history. Indeed, the two are scarcely unconnected, since the sole criterion for membership of the government is slavish obedience to the most reckless and irrational approach to Brexit. This hardly augurs well if we are reliant upon ‘character’ to see us through.

16 May 19:54

Using WordPress pages as wiki as I’ve been doin...

by Ton Zijlstra

Using WordPress pages as a wiki, as I’ve been doing in the past months, has drawbacks of course: no versioning on the page (though it is available in the admin back-end), and no easy outlining functionality (changing hierarchies, dragging snippets), for which HTML lists like <ul><li></li></ul> are only an extremely poor substitute. For the pages on this site, this is less problematic; the pages here are more static content, I think. I am noticing it more on my personal WP instance, where I write daily.

At the same time I want to avoid going on a hunt for the ‘best’ tool / set-up, which is why I am trying to not get sucked into the ongoing wave of enthusiasm for Roam. This is why WP is useful at this stage: I am using it already, both online and locally.

16 May 19:54

Yesterday afternoon we had our weekly informal ...

by Ton Zijlstra

Yesterday afternoon we had our weekly informal online hangout with the company. This time in honour of my birthday we did a ‘virtual escape room’. The cards with hints were delivered on paper by mail the same day, and then we had to compare notes online. It was fun! Round 1 was asking each other questions from the cards you got, and round 2 was using the hints on the cards to find the password for a website. Took us 4:32.

And I got a t-shirt delivered saying c’est le Ton qui fait la musique 😀

16 May 19:54

Two Weeks with the Jabra Evolve2 65

by Volker Weber

The Jabra Evolve2 65 was not love at first sight. For many years I have been using the Plantronics Voyager Focus UC (left), and that is what the Evolve2 65 (right) has to measure up against. To me the Focus simply feels more premium, which is why I didn't like the Jabra headset at first. I rearranged my desk several times, but in the end the new headset acquitted itself very well.

The concept is quite similar. The soft cushions, on ear cups that tilt in two directions, nestle snugly against the ear and thereby already keep some high frequencies out of the ear canal. The Focus adds gentle ANC on top of that, which the Jabra does without. Both can suppress normal office background noise; against vacuum cleaners or playing children they don't stand a chance. You remain approachable, and that is by design.

The microphone boom ensures that your own voice is picked up well. On the Focus it swivels in both directions, so the headset can be worn with the boom on the left or the right. On the Jabra it is always on the right, which is why the ear cups are additionally marked L and R. The Focus has the call controls on the microphone side and the playback controls on the other cup. On the Jabra everything is on the right, because the headset is also sold without the "sidecar" on the other side, as a monaural headset.


The Jabra comes in several configurations: with Teams or UC certification, and with a USB-A or USB-C Bluetooth dongle. The charging port on the headset is USB-C. The charging cable has a USB-A plug on the other end, just like the optional dock. What Plantronics calls the BT600, Jabra calls the Link 380. It is even smaller than the competing product, yet thanks to Class 1 it has a range of up to 30 meters.

Around the rim of the right ear cup there are four buttons for start/stop, mute, volume up, and volume down, plus a slide switch for off/on/Bluetooth pairing. The double assignment of the volume buttons takes some getting used to: holding one down skips playback to the next or previous track. Very clever: flipping the microphone boom up mutes the headset, so in a video call you can also see at a glance when someone is on mute.

A very nice touch on the Jabra headset is the busy light. On the front of both ear cups there is an LED strip that lights up bright red when you are in a call. The Jabra has the longer microphone boom and therefore captures your voice more clearly than the Focus, whose more aggressive noise suppression makes you sound slightly metallic. You are clearly intelligible on both headsets. To me they are very similar and equally good. The Focus is a bit older and still has a Micro-USB port. Compared to the old Evolve 75, however, the Evolve2 65 is a significant upgrade. The battery life of 37 hours is outstanding, and a 15-minute charge gives you another 8 hours.


16 May 19:54

This Word Does Not Exist

Thomas Dimson, May 15, 2020

Generative AI is the new thing this year. In one of the best efforts I've seen recently, this site creates new words with (more or less) appropriate definitions. Like 'kochinize', which means 'converted (one thing) into another by mixing it with another'. Other recent generative AI projects include Jukebox, "a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles," IDentif.AI, which generates optimal drug combinations, generated worlds and the creatures within them, and many, many more.

16 May 19:54

Will our digital lives leave a fossil record?

by Doc Searls

In the library of Earth’s history, there are missing books, and within books there are missing chapters, written in rock that is now gone. The greatest example of “gone” rock is what John Wesley Powell observed in 1869, on his expedition by boat through the Grand Canyon. Floating down the Colorado river, he saw the canyon’s mile-thick layers of reddish sedimentary rock resting on a basement of gray non-sedimentary rock, the layers of which sat at an odd angle to everything above. Observing this, he correctly assumed that the upper layers did not continue from the bottom one, because time had clearly passed between the basement rock and the floors of rock above it. He didn’t know how much time, and could hardly guess. The answer turned out to be more than a billion years. The walls of the Grand Canyon say nothing about what happened during that time. Geology calls that nothing an unconformity.

In the decades since Powell made his notes, the same gap has been found all over the world, and is now called the Great Unconformity. Because of that unconformity, geology knows close to nothing about what happened in the world through stretches of time up to 1.6 billion years long.

All of those stretches end abruptly with the Cambrian Explosion, which began about 541 million years ago, when the Cambrian period arrived, and with it an amplitude of history, written in stone.

Many theories attempt to explain what erased such a large span of Earth’s history, but the prevailing paradigm is perhaps best expressed in “Neoproterozoic glacial origin of the Great Unconformity”, published on the last day of 2018 by nine geologists writing for the National Academy of Sciences. Put simply, they blame snow. Lots of it: enough to turn the planet into one giant snowball, informally called Snowball Earth. A more accurate name for this time would be Glacierball Earth, because glaciers, all formed from accumulated snow, apparently covered most or all of Earth’s land during the Great Unconformity—and most or all of the seas as well.

The relevant fact about glaciers is that they don’t sit still. They push immensities of accumulated ice down on landscapes and then spread sideways, pulverizing and scraping against adjacent landscapes, abrading their ways through mountains and across hills and plains like a trowel through wet cement. In this manner, glaciers scraped a vastness of geological history off the Earth’s continents and sideways into ocean basins, so plate tectonics could hide the evidence. (A fact little known outside geology is that nearly all the world’s ocean floors are young: born in spreading centers and killed by subduction under continents or piled up as debris on continental edges here and there. Example: the Bay Area of California is ocean floor that wasn’t subducted into a trench.) As a result, the stories of Earth’s missing history are partly told by younger rock that remembers only that a layer of moving ice had erased pretty much everything other than a signature on its work.

I bring all this up because I see something analogous to Glacierball Earth happening right now, right here, across our new worldwide digital sphere. A snowstorm of bits is falling on the virtual surface of our virtual sphere, which itself is made of bits even more provisional and temporary than the glaciers that once covered the physical Earth. Nearly all of this digital storm, vivid and present at every moment, is doomed to vanish, because it lacks even a glacier’s talent for accumulation.

There is nothing about a bit that lends itself to persistence, other than the media it is written on, if it is written at all. Form follows function, and right now, most digital functions, even those we call “storage”, are temporary. The largest commercial facilities for storing digital goods are what we fittingly call “clouds”. By design, these are built to remember no more of what they once contained than does an empty closet. Stop paying for cloud storage, and away goes your stuff, leaving no fossil imprints. Old hard drives, CDs and DVDs might persist in landfills, but people in the far future may look at a CD or a DVD the way a geologist today looks at Cambrian zircons: as hints that digital activities may have happened during an interval about which otherwise nothing is known. If those fossils speak of what’s happening now at all, it will be of a self-erasing Digital Earth that began in the late 20th century.

This isn’t my theory. It comes from my wife, who has long claimed that future historians will look on our digital age as an invisible one, because it sucks so royally at archiving itself.

Credit where due: the Internet Archive is doing its best to make sure that some stuff will survive. But what will keep that archive alive, when all the media we have for recalling bits—from spinning platters to solid state memory—are volatile by nature?

My own future unconformity is announced by the stack of books on my desk, propping up the laptop on which I am writing. Two of those books are self-published compilations of essays I wrote about technology in the mid-1980s, mostly for publications that are long gone. The originals are on floppy disks that can be read only by PCs and apps of that time, some of which are buried in lower strata of boxes in my garage. I just found a floppy with some of those essays. (It’s the one with a blue edge in the wood case near the right end of the photo above.) If those still retain readable files, I am sure there are ways to recover at least the raw ASCII text. But I’m still betting the paper copies of the books under this laptop will live a lot longer than will the floppies or my mothballed PCs, all of which are likely bricked by decades of un-use.

As for other media, the prospect isn’t any better.

At the base of my video collection is a stratum of VHS videotapes, atop of which are strata of Video8 and Hi8 tapes, and then one of digital stuff burned onto CDs and stored in hard drives, most of which have been disconnected for years. Some of those drives have interfaces and connections no longer supported by any computers being made today. Although I’ve saved machines to play all of them, none I’ve checked still work. One choked to death on a CD I stuck in it. That was a failure that stopped me from making Christmas presents of family memories recorded on old tapes and DVDs. I meant to renew the project sometime before the following Christmas, but that didn’t happen. Next Christmas? Maybe.

Then there are my parents’ 8mm and 16mm movies filmed between the 1930s and the 1960s. In 1989, my sister and I had all of those copied over to VHS tape. We then recorded our mother annotating the tapes onto companion cassette tapes while we all watched the show. I still have the original film in a box somewhere, but I haven’t found any of the tapes. Mom died in 2003 at age 90, so her whole generation is now gone.

The base stratum of my audio past is a few dozen open reel tapes recorded in the 1950s and 1960s. Above those are cassette and micro-cassette tapes, plus many Sony MiniDiscs recorded in ATRAC, a proprietary compression algorithm now used by nobody, including Sony. Although I do have ways to play some (but not all) of those, I’m cautious about converting any of them to digital formats (Ogg, MPEG or whatever), because all digital storage media are likely to become obsolete, dead, or both—as will formats, algorithms and codecs. Already I have dozens of dead external hard drives in boxes and drawers. And, since no commercial cloud service is committed to digital preservation in perpetuity in the absence of payment, my files saved in clouds are sure to be flushed once neither my heirs nor I continue paying for their preservation.

Same goes for my photographs. My old photographs are stored in boxes and albums of photos, negatives and Kodak slide carousels. My digital photographs are spread across a mess of duplicated back-up drives totaling many terabytes, plus a handful of CDs. About 60,000 photos are exposed to the world on Flickr’s cloud, where I maintain two Pro accounts (here and here) for $50/year apiece. More are in the Berkman Klein Center’s pro account (here) and Linux Journal‘s (here). It is currently unclear whether any of that will survive after any of those entities stop paying the yearly fee. SmugMug, which now owns Flickr, has said some encouraging things about photos such as mine, all of which are Creative Commons-licensed to encourage re-use. But, as Geoffrey West tells us, companies are mortal. All of them die.

As for my digital works as a whole (or anybody’s), there is great promise in what the Internet Archive and Wikimedia Commons do, but there is no guarantee that either will last for decades more, much less for centuries or millennia. And neither is able to archive everything that matters (much as they might like to).

It should also be sobering to recognize that nobody owns a domain on the internet. All those “sites” with “domains” at “locations” and “addresses” are rented. We pay a sum to a registrar for the right to use a domain name for a finite period of time. There are no permanent domain names or IP addresses. In the digital world, finitude rules.

So the historic progression I see, and try to illustrate in the photo at the beginning of this post, is from hard physical records through digital ones we hold for ourselves, and then up into clouds that go away. Everything digital is snow falling and disappearing on the waters of time.

Will there ever be a way to save for the very long term what we ironically call our digital “assets?” I mean, for more than a few dozen years? Or is all of it doomed by its own nature to disappear, leaving little more evidence of its passage than a Digital Unconformity, when everything was forgotten?

I can’t think of any technical questions more serious than those two.


The original version of this post appeared in the March 2019 issue of Linux Journal.

16 May 19:53

What are the right Pedestrian Call Buttons for the Pandemic?

by peter@rukavina.net (Peter Rukavina)

The City of Charlottetown is in the final stages of rebuilding the traffic signalling at two downtown intersections, Kent & Great George and Kent & Prince. The new infrastructure at both intersections includes pedestrian call buttons, and these are of the “jam your finger into a rubber nobbly bit to activate a button underneath” type that look like this:

New pedestrian call buttons in Charlottetown

At other intersections in the city, rebuilt earlier, there’s a different kind of pedestrian call button, a large silver disk. The advantage of this type over the newer ones is that you don’t need to jam your finger into anything and, indeed, you can press the button with your wrist, your arm, your hip, your cane, or anything else. So you don’t actually need to touch anything directly:

Older style pedestrian call button at Great George and Euston

While I like the fact that I don’t need to touch this kind of button, it also doesn’t have a lot of “travel,” meaning that most of the time I’m not 100% sure I’ve actually activated it: although in most cases there is an LED and a sound to provide feedback, I often remain unsure of whether my press has “taken.”

There’s a rarer third type of call button, the Guardian APS, installed at the corner of Prince and Euston when it was rebuilt some years ago. Confoundingly, these call buttons were disabled shortly after installation and have remained installed but non-functional in all the years since. While this model of button has laudable accessibility features, including audio and haptic feedback, and a design that reinforces, visually and by touch, the direction the signal activates, like the rubber-nobbly signals, they also require contact, and cannot be activated without it.

Accessible call button at Prince and Euston.

Which are the right buttons for the pandemic? Which are the right ones for accessibility? Is there a model that works for both?

16 May 19:53

HSTM20 Unconference, the Pandemic Edition

by Ton Zijlstra

Today we joined the HSTM20 Unconference, organised by our friend Oliver with logistics support from Peter, both of whom live on Prince Edward Island in Canada. HSTM stands for Home Stuff That Matters; that last bit is a nod to our STM birthday unconferences, so this is, as Peter said today, another branch on the evolving tree of unconference events.

The “Home” in Home Stuff That Matters points to us all being home due to the pandemic, and to the two questions we discussed: What have you learned from the pandemic that you want to keep for the future? What do you like about the place where you live?

We were over 25 people, from around the world, across ten time zones, so from morning coffee time to end of afternoon, and evening. It was a nice mix of familiar faces and new ones, spending two hours in conversation. It was good to see dear friends, as well as meeting people again we first met last year when we visited Peter, Catherine and Oliver on PEI for a face to face unconference.

The event also showed how well Zoom works. With over 25 participants from literally around the world, with a wide variety of bandwidth and tech savviness, it worked without issue, splitting up from a plenary into multiple groups and rejoining into a plenary. It’s in a different class than other tools I’ve been using, even with its dubious information ethics.

Regrouping ourselves as Oliver’s tribe this time, it was an excellent way to kick-off our weekend.

Part of Oliver’s tribe in conversation today

16 May 19:53

Andrew Sullivan adds some plague perspective: “Get a Grip”

by Gordon Price

Sullivan, a columnist with New York magazine (and an early blogger – one of the best before it became too great a burden), provides some helpful perspective for our time by comparing it to year of the London plague (one year before the Great Fire):

Historians now rank the 1665 plague as the worst of that century (though much less severe than the Black Death of 1348). By September, as it peaked, there were 7,000 deaths a week. In COVID-19, the fatality rate is around one percent. In London in 1665, in a matter of seven months, around a quarter of the population perished. The number is vague because so many records were destroyed by the Great Fire of London, which broke out a year later. But it’s still staggering. A rough equivalent today would be 4 million deaths in the New York City metro area this year alone — with no real medical care, and people dropping dead on the streets.

Now imagine that after the deaths of those 4 million, much of Manhattan were to be burned to the ground by a massive and uncontrollable fire. That’s what Londoners had to handle in just two years: a pandemic of far greater scope than ours, and a conflagration that amounted to 9/11 several times over. And it was not the end of the world.

In fact, in just a couple of years, the population of the city had rebounded. The massive fire had killed much of the rodent population that had been spreading the fleas behind the plague. London was rebuilt, stone replaced wood, and Christopher Wren was brought in to design and replace the old Saint Paul’s Cathedral and over a dozen other landmarks of the city to this day.

What must have felt like an apocalypse of plague and fire became, with astonishing speed, a new city, forged anew by communal trauma, and soon to be the most powerful capital in the world. And somehow, Pepys lived through all of it, face-to-face with death, and never stopped living, maintaining a stoic cheerfulness and humor throughout. And today, in the richest country on Earth, with medical technology beyond Pepys’s wildest imagination, and a plague killing a tiny fraction of the population, some are wielding weapons in public to protest being asked to stay at home for a few more weeks and keep a social distance. Please. Get a grip.

Full column here.

16 May 19:53

The Longest Time~Quarantine Edition

by Sandy James Planner


Via Portland’s Tim Davis, here’s Vancouver’s Phoenix Chamber Choir putting to song what everyone is feeling. So very well done. As Tim says:

“One of my favorite performances of 2020 thus far is by a chamber choir based in Vancouver, BC. Here’s their hilarious “quarantine edition” of Billy Joel’s “Longest Time.” Phoenix Chamber Choir (https://phoenixchoir.com/about) has won tons of top prizes in the annual CBC National Choral Competition. I’ll have to see them perform in Vancouver sometime!”

16 May 19:53

Twitter Favorites: [knguyen] people who unequivocally love books have never packed for a move

Kevin Nguyễn @knguyen
people who unequivocally love books have never packed for a move
16 May 19:52

Prediction: Vancouver RE Prices Will Not Crash… Unless They Crash

by vreaa
mkalus shared this story from Vancouver Real Estate Anecdote Archive.

“If homeowners can just hold off selling, the Canadian housing market will emerge fine from its current “deep freeze”.
According to a recent TD Economics housing forecast update, the market is expected to gradually recover from the effects of the COVID-19 pandemic.
After an anticipated “historic” plunge in sales in the month of April 2020, a “much stronger activity” is seen next year.
A lot of that depends on whether homeowners can avoid distressed selling during this pandemic.
“Absolutely key to our forecasts is the assumption that listings mirror sales by dropping substantially in the near-term and recovering gradually thereafter,” Rishi Sondhi, an economist with TD Economics, wrote.
By holding off on selling, homeowners can do one thing for the market.
“This puts a floor on prices and sustains relatively tight-supply demand balances across most markets, allowing for the resumption of positive price growth as provincial economies are re-opened,” Sondhi explained.”
– excerpt from ‘Homeowners avoiding distressed selling key to Canadian housing market recovery: TD Economics’, Carlito Pablo, 1 May 2020, Georgia Straight

No, folks, that ‘analysis’ is not from ‘The Onion’.
Seems like the TD analysts have found a sure fire way of maintaining every bull market, forever… (it’s easy: just get sellers not to sell).
It’s remarkable that this kind of ‘analysis’ can get parroted on & on without getting called out.
Remember: Sellers aren’t competing with Buyers, they’re competing with other Sellers. How many Vancouver RE speculators (essentially each and every buyer for the last 10-15 years) are going to realize their thus-far-paper profits? We are already seeing many anecdotal examples of people who bought in 2016 or later taking losses on resales.
– vreaa



16 May 19:46

Presenting Presence in Video Calls

by Ton Zijlstra

Two things to read and think about on presenting and being present in video talks and webinars. One by Matt Webb, Rethinking Video Talks (found via Peter Rukavina), on how he rethinks his usual careful design of switching between slides and himself, creating rhythm and building up sequences. One by Bryan Alexander on how to ensure webinars aren’t bad and boring, the first of a series of upcoming postings.

16 May 03:22

David Smith’s watchOS 7 Wishlist

by Ryan Christoffel

David Smith, developer of Watchsmith and a host of other Apple Watch apps, shared his watchOS 7 wishlist today. With his pedigree, there’s no one I trust more to make a thoughtful, realistic, well-informed list of requests for watchOS than Smith. For example, here’s an excerpt of his introduction:

I am fully aware of the constraints of the Apple Watch. I’ve spent the last 6 months pushing the limits of what is possible for it and have seen all the corners of its use, where it completely falls apart.

Nearly every one of these ideas or features involves a tradeoff. Either between battery life and capability or between complexity and intuitiveness. I suspect Apple’s own internal list of ideas and possibilities far outstrips my own. The reason they haven’t built a feature yet isn’t because they haven’t thought about it.

Instead it is quite the opposite. They have chosen explicitly to not do it yet. This is the tricky calculus involved in evolving a platform. If they push too fast, too soon on the capability side then they may end up destroying the battery life of the device. Or if they add too many features then they might end up with a jumbled mess that users can’t understand.

I don’t envy the leadership that has to sit down and make the hard calls of what to do, when.

Some of the features he mentions that are at the top of my own list include rest days for activity tracking, true independence, and multiple complications. The full list is well worth exploring, and offers valuable insight into what we might see revealed next month.

→ Source: david-smith.org

16 May 03:19

Rethinking conference talks for video calls

I’ve done two Zoom talks this week. I love public speaking and I aspire to be at least “not shit” in this new medium. So, some work in progress thoughts…

As the speaker, I need to see faces

Donkey’s years ago Danny O’Brien said to me that the purpose of asking cheesy audience-participation questions like raise your hand if you’ve ever etc is not to connect with the audience…

Sometimes, when I’m doing a talk, I start to disconnect – it’s like I can suddenly hear my own voice. I lose my mental overview and ramble into the weeds. At worst, I freeze completely (that’s happened once). It’s at that moment that I remember Danny saying that speaking on a stage, just talking without anybody to speak with, is psychologically weird and unnatural, and you need a way to kid yourself it’s a normal conversation. So that’s when to ask the audience a question, which I then do, and it’s enough to fool my brain so we get back on track.

Ok so Zoom is terrible for this.

Because everyone’s on mute, there’s not even the feedback of ambient noise.

So, at a minimum, I think audience cameras should stay switched on. I want to see people’s faces – ideally in gallery view (that grid of thumbnails).

There’s a risk that someone will look distracted, and that’s off-putting, but I regard that as a separate problem. See below.

How about designing slides for thumbnail view?

In real life talks, audience attention bounces between the speaker and the slides. In a way, there are two characters on stage. You can play games with that.

One gag I’ve used a few times builds up over 6 slides, and I bring up each slide and read out the words, training the audience to look at the text. Then on the last slide, which is funny because it undermines me slightly, I don’t say anything, but by habit everyone reads the punchline and encounters the gag for themselves. That makes it funnier.

So you can deliberately push attention around. You can turn around and face the slides, directing attention towards them and placing yourself with the audience. You can use a series of visually complex slides, followed by a very plain one, which suddenly leaves all that captured attention with nothing to alight on except you and your next statement, and it works like a spotlight. It can be very effective.

BUT, generally speaking, audience members look sometimes at the speaker and sometimes at the slides, and they’re entirely in charge of how and when they do that, and it’s a really good mechanism to avoid getting bored.

And they can’t do this on Zoom.

The problem with Zoom and screen sharing is that the slides take up the whole damn view and you, the speaker, get relegated to a thumbnail. (On other platforms it’s worse: you see the slides, and the speaker becomes a disembodied voice.)

The slide dominates the speaker. So

  • either the slide is information-rich, in which case a listener will look at the slide and ignore the speaker,
  • or the slide is basic (for example, a single clear statement) in which case it still grabs their attention, because it’s massive, but the listener has to stare at something boring while you make the real point.

For me, the best approach would be no screen sharing.

Here’s what I want to try:

The slides and speaker should appear in separate thumbnails. Audience members should be encouraged to click at will between the two, sometimes looking at one, and sometimes the other. Their choice.

Unfortunately I can’t figure out how to do this – as far as I can tell, screen sharing on Zoom will cause the slides to always occupy the full screen view for all call participants. Thoughts welcome.

Slides for orientation

I’m accustomed to using slides for pace and rhythm. Sure, individual slides can be for information, illustration, a counterpoint, to provide a section break, and so on. But with a sequence of slides, that’s how you control pace.

A talk needs slow expansive sections plus also rapid-fire bits to rattle through – the variety of pace keeps it interesting!

For some reason, this doesn’t work on Zoom. Maybe because, in the end, a computer screen is pretty small compared to the room the viewer is sitting in? So a sequence of slides doesn’t have enough impact to effectively set pace?

Actually it’s almost like there’s no pace at all. I feel like online talks have a kind of timelessness. The speaker drones on, the slides tick over, when will it all end…

So I’m trying to figure out ways where each slide can communicate where we are in the talk, at a glance.

Perhaps, if a talk were to have three sections, each section is given a different background colour? Headers and footers aren’t visible when you’re designing for thumbnail view.

Massive numbers maybe? One talk I did this week was organised around 8 tips (numbered), and two breaks which I announced at the beginning. This structure was given at the head of the talk. Each break had a suggested topic to think about, and then I took general questions from the text chat before resuming. (Of course these are also opportunities for people to check their email, which relieves some of that distracting tension.)

I think there might be something in this approach.

Background matters less than lighting and position

Camera at eye level. Nobody wants to look up my nose.

Close but not too close. The ideal distance is so that the top of my head is sometimes cropped out (if I’m leaning in), but my hand gestures should always be visible (hands at belly height and up).

I use two light sources: room lighting for overall brightness, and a bright lamp on one side (not pointed directly at me) to provide texture and shadow across my face. I’ve done calls where there’s a light source like a window behind me. Never again.

My background is neutral grey. For calls it doesn’t have to be, background is character, like choosing what top you wear (though your face should always be better lit than the background, to draw focus). But for talks, I think it’s important to stand out, and that means a plain background.

An opportunity to re-think how talks work

I like talks. I like hearing engaging speakers, and I like talks dense with ideas – whether these are unique and hard-won insights, or weird anecdotes.

I’ve seen some speakers who can hold my attention and the whole audience with no slides at all. I love it. I’m unable to do speeches myself, but I love a good one.

My favourite talks are the ones that last 45+ minutes, especially the ones where the speaker has notes. There’s room to travel along with the speaker to somewhere new. 45 minutes is enough to build new perspectives.

When TED came along, and TEDx, suddenly all talks everywhere had to be 18 minutes long. I don’t like 18 minute talks. There’s only room for one idea. Especially because the speaker is denied notes.

I like live talks. I don’t get along with recorded talks – I don’t find them fun to watch, and I don’t like the idea of recording without an audience. I want to see the whites of their eyes. Mind you, I don’t really get on with TV or YouTube either, so perhaps it’s just me.

Anyway.

I find the idea of Zoom talks fascinating. What does it mean to do something

  • which is live
  • where everyone in the audience is potentially multitasking
  • that includes a text chat backchannel which is visible to everyone?

Could talks get longer again, because tuning in isn’t such a commitment – the audience can be present but also checking their email?

But with new elements, and new approaches to structure, and new approaches to performance and interactivity?

No idea. Working on it.

16 May 00:53

Treeherder developer ergonomics

by Armen Zambrano

In the last few months I’ve worked with contributors who wanted to be selected to work on Treeherder during this year’s Google Summer of Code. The initial proposal was to improve various Treeherder developer ergonomics (read: make Treeherder development easier). I’ve had three very active contributors that have helped to make a big difference (in alphabetical order): Shubham, Shubhank and Suyash.

In this post I would like to thank them publicly for all the work they have accomplished as well as list some of what has been accomplished. There’s also listed some work from Kyle who tackled the initial work of allowing normal Python development outside of Docker (more about this later).

In the end, I won’t be participating in GSoC due to burn-out and because this project is mostly completed (thanks to our contributors!). Nevertheless, two of the contributors managed to get selected to help with Treeherder (Suyash) and Firefox Accounts (Shubham) for GSoC. Congratulations!

Some of the developer ergonomics improvements that landed this year are:

  • Support running Treeherder & tests outside of Docker. Thanks to Kyle we can now set up a Python virtualenv outside of Docker and interact with all dependent services (mysql, redis and rabbitmq). This is incredibly useful to run tests and the backend code outside of Docker and to help your IDE install all Python packages in order to better analyze and integrate with your code (e.g., add breakpoints from your IDE). See PR here.
  • Support manual ingestion of data. Before, you could only ingest data by setting up Pulse ingestion. This meant that you could only ingest real-time data (and all of it!) and could not ingest data from the past. Now, you can ingest pushes, tasks and even Github PRs. See documentation.
  • Add pre-commit hooks to catch linting issues. Prior to this, catching linting issues required you to remember to run a script with all the linters, or to wait for Travis to let you know. You can now have the linters execute automatically on modified files (instead of all files in the repo), shortening the linting feedback cycle. See the hooks in the pre-commit file.
  • Use Poetry to generate the docs. Serving the Treeherder docs locally is now as simple as running “poetry install && poetry run mkdocs serve.” No more spinning up Docker containers or creating and activating virtualenvs. We also get to introduce Poetry as a modern dependency and virtualenv manager. See the configuration in the pyproject.toml file.
  • Automatic syntax formatting. The black pre-commit hook now formats files that the developer touches. No need to fix the syntax after Travis fails with linting issues.
  • Ability to run the same tests as Travis locally. In order to reduce differences between what Travis tests remotely and what we test locally, we introduced tox. The Travis code is simplified, the tox setup can even automate starting the Docker containers, and it replaced a bash script that was trying to do what tox does (and which Windows users could not execute).
  • Share Pulse credentials with random queue names. In the past we required users to set up an account with Pulse Guardian and generate their own PULSE_URL in order to ingest data. Last year, Dustin gave me the idea that we can share Pulse credentials, as long as each consumer ingests from a dynamically generated queue name. This was initially added to support Heroku Review Apps; however, it works for local consumers as well. This means that a developer ingesting data will not be taking away Pulse messages from the queue of another developer.
  • Automatically delete Pulse queues. Since we started using shared credentials with random queue names, every time a developer started ingesting data locally it would leave some queues behind in Pulse. When the local consumers stopped, these abandoned queues would grow unbounded and send my team and me alerts about it. With this change, the queues are automatically destroyed when the consumers stop consuming.
  • Docker set up to automatically ingest data. This is useful since ingesting data locally used to require various steps to make it work. Now, the Docker setup ingests data without manual intervention.
  • Use pip-compile to generate requirement files with hashes. Before, when we needed to update or add a Python package, we also had to add the hashes manually. With pip-compile, we can generate the requirement files with all hashes and subdependencies automatically. You can see the documentation here.
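
For readers unfamiliar with pre-commit, the hook setup described above lives in a .pre-commit-config.yaml file at the repository root. A minimal sketch with a black hook might look like the following — the repo revisions here are illustrative, not Treeherder’s actual pinned versions, and Treeherder’s real config includes more hooks:

```yaml
# .pre-commit-config.yaml — minimal illustrative sketch
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0        # illustrative pin, not Treeherder's actual version
    hooks:
      - id: black
```

After running `pre-commit install` once, these hooks run automatically against only the files staged in each commit, which is what shortens the feedback cycle compared to linting the whole repository.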

There are many more changes from our contributors than I can cover here. You can see the complete list here.

Thank you for reading this far and thanks again to our contributors for making development on Treeherder easier!

15 May 03:17

MacBook Air (2020) Review: Apple’s laptop for everyone

by Patrick O'Rourke
MacBook Air (2020)

Now that I’ve used Apple’s new MacBook Air (2020) consistently for the last few weeks, I can confidently say that I’d recommend it as the tech giant’s laptop for the average MacBook user.

It’s sleek, light, almost powerful enough to do everything I need thanks to a worthwhile processor upgrade, and most importantly, the laptop has ditched Apple’s beleaguered Butterfly mechanism in favour of the excellent new Magic Keyboard.

All that said, I ran into significant thermal throttling issues when trying to run several Adobe apps simultaneously while connected to a 4K display — because of this, shelling out significantly more cash for the 10th Gen Intel i5 Macbook Pro (2020) might be a better option for some people in the creative space.

MacBook Air (2020) top down

However, the audience Apple is going after with the new MacBook Air likely won’t be connecting ultra-high-resolution screens to the laptop or, in most cases, running resource-intensive apps like me. In several ways, the new Air is a true return to form for Apple’s laptop for the average MacBook user, save for a few issues.

Regardless of what configuration you opt for, the new MacBook Air is likely sufficiently powerful for most people — you just need to be aware that you can’t push its hardware quite as far as you should be able to on paper.

MacBook Air (2020) on white table

Technical specifications

  • Retina display with True Tone
  • 1.1GHz quad-core 10th-generation Intel Core i5 processor
  • Turbo Boost up to 3.5GHz
  • Intel Iris Plus Graphics
  • 8GB 3733MHz LPDDR4X memory
  • 512GB SSD storage
  • Magic Keyboard
  • Touch ID
  • Force Touch trackpad
  • Two Thunderbolt 3 ports
  • Height: 0.41–1.56cm (0.16–0.61 inches)
    Width: 30.41cm (11.97 inches)
    Depth: 21.24cm (8.36 inches)
    Weight: 1.25kg (2.75 pounds)

Here’s what’s new under the hood

MacBook Air (2020) keyboard

Apple has finally killed off the Butterfly keyboard mechanism with the release of its new MacBook Air and MacBook Pro. While I don’t fall into the camp of Mac users that hated the Butterfly keyboard’s low-travel keys, the underlying mechanism suffered significant issues over the years that resulted in dead and unresponsive keys.

Any way you look at it, an unreliable keyboard is unacceptable, especially for a laptop that costs as much as the MacBook Air. If you’ve used Apple’s Magic Keyboard before, you know what to expect here. The new scissor-switch Magic Keyboard feels snappy and responsive, and will likely satisfy anyone who hated the Butterfly mechanism.

The other major upgrade this year is Intel’s 10th Gen processors, starting with the 1.1GHz dual-core i3 processor for $1,299 CAD and going all the way up to a 1.1GHz quad-core i5 for $1,699. I’d argue that most people should spend a few hundred dollars more and buy the latter Air configuration, given the additional power it offers.

MacBook Air (2020) on an angle

To be clear, these are still lower-power Y-series Ice Lake chips and not the U-series processors featured in Apple’s higher-end MacBook Pro line. There are comparable, lower-priced Windows 10 laptops that offer 10th-generation U-series power, so it’s disappointing that Apple has opted to stick with Y-series processors for the 2020 MacBook Air, just as it did with the 2018 and 2019 versions of the laptop.

Since the new Air now features Intel’s 10th Gen processors, it also includes the chip maker’s more powerful Iris Plus integrated graphics, allowing the laptop to drive a 6K display. The odds of a MacBook Air user actually owning a 6K display are pretty minimal, though it’s a nice capability to have.

MacBook Air (2020) side

Keep in mind, however, that when using apps like Adobe Lightroom CC, Photoshop CC and Audition CC, the Core i5 Air I’m using struggled when connected to my Benq EW3280U 4K display. Anyone looking for a laptop powerful enough to handle tasks like this while connected to a 4K external monitor will likely want to opt for the new MacBook Pro. On the other hand, the Air was able to handle running these apps relatively easily when not connected to an external display, though I experienced a few bouts of lag when editing multiple pictures at once in Photoshop CC and Lightroom CC.

Be prepared to hear the new Air’s fan spin up a lot, even when merely running an app like Google Chrome. For example, while writing this review, I have Lightroom running and several Chrome windows open. The MacBook Air’s fans are whizzing away and I can feel the bottom of the laptop slowly heating up.

I don’t find this to be an issue at all. Still, the fans can get pretty loud, so if you’re like MobileSyrup staff writer Bradley Bennett and hold unreasonable hate for laptops with fans (RIP OG MacBook), you aren’t going to be pleased with the way the new Air handles thermal throttling.

Though I talked about this at length in my first look at the new Air, it’s worth mentioning again. Although the new MacBook Air’s 10th Gen i5 processor benchmarks higher than my 2017 MacBook Pro with a dual-core 7th Gen Intel i7 processor — 2,294 multi-core versus 2,128 — the lag I encountered when running a dual-monitor setup with Apple’s MacBook Air (2020) was perplexing. On paper, the additional cores should keep the laptop from struggling with these apps, but in practice there are situations where it does. For comparison, the i5 10th-Gen MacBook Pro (2020) hits 1,188 for its single-core score and 4,347 for multi-core, making it significantly more powerful than the MacBook Air (2020), as expected.

This, again, is the Air’s aggressive thermal throttling in action, limiting the processor’s maximum power to manage the heat it’s emitting: Intel’s Turbo Boost feature ramps the processor up to 3.2GHz when necessary and then moves it back down to 1.1GHz.

MacBook Air (2020) on its side

To be fair, the audience the Air is targeting probably won’t be pushing the laptop to its limits, but if you’re thinking about replacing an older MacBook Pro with this update to Apple’s lightweight laptop, the thermal throttling is something to consider.

It’s important to note that the Air can now be configured with a quad-core processor and up to 16GB of RAM, though only with previous-generation 2133MHz LPDDR3 RAM, unlike the new MacBook Pro (the configuration I used for this review has 8GB of RAM). Apple also increased the laptop’s minimum storage to 256GB from 128GB, which is a welcome move on the tech giant’s part and likely should have happened a few years ago.

MacBook Air (2020) Touch ID sensor

The new Air features a snappy and responsive Touch ID fingerprint sensor, Apple’s T2 Security Chip, and there’s no Touch Bar just like the previous version of the MacBook Air. I didn’t miss this feature at all since I haven’t found the Touch Bar very useful.

The only other notable change over last year’s MacBook Air is improved stereo sound that delivers 25 percent more volume and twice the bass, according to Apple. While the speakers do indeed sound better, in my experience, the jump in quality isn’t as noticeable as it was from the 15-inch MacBook Pro to the newer 16-inch version of the laptop.

It’s not totally perfect

MacBook Air (2020) keyboard

Keep in mind that while the new MacBook Air is a true return to form for Apple’s popular laptop line, it still has issues.

For one, in my experience battery life comes in at a disappointing five to six hours, depending on what I was running on the laptop. Of course, some of this stems from the fact that I still use Chrome, a notorious battery and resource hog. That said, just like with the MacBook Pro (2020), it would be great to see Apple push the battery life of its laptops the way it did years ago. I shouldn’t be effectively limited to using Apple’s own apps to get decent battery life, because that’s not how everyone will use the laptop. Of course, some of the blame here lies with Google.

You could argue that in the face of sleek-looking Windows 10 laptops, the Air can look a little dated, especially when it comes to its small but still visible display bezels. I don’t hold that opinion, given the most recent MacBook Air redesign only launched back in 2018, but it would have been nice to see Apple add some sort of aesthetic flourish. The new Air is also just as sleek and lightweight as its 2019 predecessor, coming in at 1.25kg and 0.41 to 1.56cm thick.

MacBook Air USB-C port

Thankfully its 2560 x 1600 pixel resolution display looks just as great as the MacBook Pro’s too, complete with ‘True Tone,’ Apple’s ambient light adapting display technology. The screen doesn’t feature a P3 wide colour gamut like the MacBook Pro, though I didn’t notice a significant difference in vibrancy. The laptop also doesn’t include Wi-Fi 6, but given few devices even support the new technology I don’t see this as an issue.

Unfortunately, the Air’s webcam is still underwhelming. It’s okay, but given how essential webcams are in the current world we live in, it would have been nice to see Apple improve it at least a little. Finally, there are only two USB-C Thunderbolt 3 ports, both on the left-hand side of the laptop. At the very least, I would have liked to see the ports split up, with one on either side of the MacBook Air (2020); ideally, Apple would have added an extra port on the other side. To use the new Air with my external monitor, I was forced to reconfigure my whole setup, since my monitor’s cord wasn’t quite long enough to reach the Air’s left-side ports.

MacBook Air (2020) display

The post MacBook Air (2020) Review: Apple’s laptop for everyone appeared first on MobileSyrup.

15 May 03:13

How the Coronavirus Will Change Faculty Life Forever

Bryan Alexander, Chronicle of Higher Education, May 14, 2020
Icon

More than anyone else, Bryan Alexander has been tracking the impact of the pandemic on U.S. higher education, from the pivot to online learning to the layoffs to the institutional mergers and closures. This article looks at the impact on faculty members, as there are greater pressures to support students and generate revenue, as well as on universities, as things like building maintenance fall behind and student life in 2021 looks very different from 2019. In typical fashion, though, Alexander predicts a light at the end of the tunnel. "After all, the tenure-track triad of teaching, research, and service still exists. So do familiar professional disciplines, and faculty members get to devote their days to their fields, if with some adjustments in class assignments and research support." I wonder.

Web: [Direct Link] [This Post]
15 May 03:13

Dithering and Open Versus Free

Ben Thompson, Stratechery, May 14, 2020
Icon

I'm including this post not to promote a new paid podcast (though that's what the article does) but rather to point again to what's happening in the world of podcasts. Back in the days when Google still allowed RSS to exist you could subscribe to many podcasts from many providers, using the podcatcher of your choice. Now, though, they're largely being streamed through a single service owned by Spotify. Spotify is obviously trying to do to podcasting what Google and Facebook did to news, that is, to become the only provider. This new podcast bucks the trend (as does my own, though I'm not what anyone would call a regular podcaster). As regards this for-pay podcast, well it's the same as for news: it will reflect the interests and values of those with the money to pay. Charging money does not create equity of access and/or content, which is why subscription content is not free content.

Web: [Direct Link] [This Post]
15 May 03:13

Slack on mobile is getting a redesign that will make it easier to use the app

by Dennis Price

Popular business communication platform Slack is getting a similar redesign on mobile to what the desktop version of the app received back in March.

The update will remove extra clutter for iOS and Android users, according to Slack. It will also make it easier to get to other areas of the app.

The home screen now features tabs that let you quickly access your direct messages, mentions, your status and notification settings.

Slack has also made improvements to users’ work-life balance by making it easier to respond to off-hours requests and to snooze alerts entirely.

In addition, the compose button is coming to mobile users, as well as the call button for when text isn’t going to cut it.

The update is rolling out this week, says Slack, so it should appear in the iOS and Android app store soon.

Source: Slack Via: Engadget

The post Slack on mobile is getting a redesign that will make it easier to use the app appeared first on MobileSyrup.

15 May 03:12

How to prepare for losing your programming job

Another week has passed, and another 3 million people in the US have filed for unemployment. While the current situation hasn’t impacted programming jobs quite as much, it’s just a matter of time before the economic damage hits most everywhere. There will be layoffs, and plenty of them, and occasionally whole companies shutting down.

So even if your job is secure now, you might still lose it in the future. How can you prepare? What can you do to reduce your future risks?

The first thing you need to do is come up with a plan, which is what this article is all about. In particular, you will want to:

  • Try to make sure you have the necessary financial resources.
  • Make your future job hunt easier, by building a network, making sure your skills are up-to-date, and making sure you have visible public proof of your skills.
  • Come up with a series of fallback plans if things don’t go well.

Let’s go over these one-by-one.

Money in the bank

If you lose your job, you lose your paycheck—but you still have to pay your bills. And after the dot-com bust, the last big tech recession, it took years for all the jobs to come back.

If you have at least six months of living expenses in cash, that’s a good start. If not, it’s best to think about how to get there.

There are two sides to this:

  1. If possible, cut your expenses; this will both allow you to save and reduce how much money you need for each unsalaried month. See this more detailed article.
  2. Ensure your financial assets, if you have any, aren’t correlated with your job.
    1. If you own stock in your own company, you are making a double bet: if the company goes down, you will lose money and your job.
    2. If you work for a startup that needs to raise money soon, a crashing stock market will also greatly reduce the viability of your current job.
    3. More broadly, if you own stocks and to a lesser extent corporate bonds, how correlated are they with your ability to keep a job?
    4. Even more broadly, how much of your net worth is tied to the tech industry, or the economy as a whole?

In short, you want cash on hand, and plenty of it.
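
The expense-cutting advice in point 1 compounds: lower spending both grows your savings faster and stretches each saved dollar further. A toy sketch of the arithmetic (all numbers are made up for illustration):

```python
def runway_months(savings: float, monthly_expenses: float) -> float:
    """How many months your savings last at your current burn rate."""
    if monthly_expenses <= 0:
        raise ValueError("monthly expenses must be positive")
    return savings / monthly_expenses

# Same $18,000 in the bank; cutting $500/month of expenses buys
# more than an extra month of runway.
print(runway_months(18_000, 3_000))  # → 6.0
print(runway_months(18_000, 2_500))  # → 7.2
```

The six-month target above is a floor, not a ceiling: after the dot-com bust it took years for jobs to come back, so every recurring expense you cut moves the target closer on both ends.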

Making your future job hunt easier

Searching for a job will be much easier if you:

  • Know lots of people.
  • Have useful skills.
  • Can visibly demonstrate you have those skills.

Let’s cover those one by one.

Knowing lots of people

Applying for a job by sending in your resume is the hardest way to get hired. It’s much easier if you know someone who can vouch for you, can get you past the initial screen, or can fill you in on what the hiring manager really wants.

So the more people you know, the better off you are. Elsewhere I have a guest post about (social) networking, but that can take time and is harder during a pandemic. But there are still a few easy things you can do in the short term:

  • Join a public Slack or two for the technology area you specialize in. You can help answer people’s questions, see when people mention they’re hiring, and more broadly get a better sense of the zeitgeist, which is useful for building your skills (see below).
  • Keep the contact info for former co-workers. This can be done via LinkedIn, for example, and often there will be an ex-employee Slack. If there isn’t one, you can start it—especially if your company is having initial rounds of layoffs. This too can often be very educational, as former employees might be more forthcoming.
  • Find ways to help other people. Can you teach useful skills? Join a local mutual aid organization?

Build useful skills

If you’ve been working at the same job for a while, it’s easy for your technical skills to get a little stale. Unless you’re working at the right place, hanging out with the right people, or doing the right things, you might not be aware of the latest technology, or you might be using out-of-date practices.

So you’ll want to update your skills a little. As always, doing this extensively outside of your job may not be possible, so try to:

  1. Spend an hour a week, ideally during work hours, getting up-to-date on the latest technologies. The goal here is breadth, not depth: sign up for a newsletter for your technology stack (here’s a partial list), skim the topics at a relevant conference, maybe watch a talk or two. I cover learning for breadth here, but the basic idea is that knowing a tool exists and what it does takes very little time, and is quite valuable on its own, both on the job and in interviews (“I haven’t used it myself, but I believe tool X is how you would solve this”).
  2. Try to learn more technologies on the job, because that is the best place to do so.

Create visible proof of skills

Having skills is one thing, proving you have them is another. It is therefore quite useful during a job hunt to have some visible, public proof you have these skills. For example:

Open source: When I moved to the US in my 20s, my work on an open source project made it much easier for me to get job interviews, and eventually job offers. It wasn’t just that my resume said that I knew computer networking, I could point to a publicly available project used by real people and say “I worked on that”.

Even if you share code that isn’t widely used, it can still be useful as proof of skill.

Conference talks: Speaking publicly about a particular skill, technology, or project is a great way to get public proof of skills. With conferences moving online, speaking is now much easier: you don’t have to travel or pay for travel, and you don’t have to get approval from your manager to miss work. If there’s a topic where you know enough to help someone else, look for conferences on the topic and submit a proposal.

Blogging: Have something to share, or learning something new? Write it down and share it publicly. Writing well is an immensely useful skill in general, so this will also count as improving your skills. You can write for your own blog, or you can propose a blog post on your company’s tech blog, if they have one.

Fallback plans

In an ideal world you would lose your job, start a job search, find a new job within a month, and everything will be fine. Sadly we don’t always live in an ideal world.

So if you live in a country like the US, which has a shitty social safety net, it’s worth coming up with a series of fallback plans, if only for your own peace of mind.

For example, how can you make your money last longer?

  1. As soon as you lose your job, apply for unemployment.
  2. Cut additional costs.
  3. If time stretches on and you still don’t have a job, figure out ways to reduce housing costs. Are you young and have the ability to move back in with your parents? Have more room than you need and the option of adding roommates? All in a pandemic-safe way, of course.
  4. Are there other ways you can make money, if it’s really taking too long?

If you can’t find a job immediately, you will probably have more time to upgrade your skills.

  1. Which skills are worth working on?
  2. What’s the best way to improve them?

You’ll also want to meet more people who can help you find a job.

  1. Can you go to online meetups?
  2. Find more places to interact with people online?

Write this all down, and when you’re worried you’ll at least have the comfort of knowing there will be some things you can do if and when your job goes away.

We’re all in this together

As with most big problems, there is only so much you can do as an individual: to meaningfully improve the situation we need to work together, whether in mutual aid groups or via political organizing. On the other hand, you need to ensure that you as an individual are doing OK; you can’t help others if you’re collapsing under your own troubles.

And since this can all be overwhelming, start with a few simple actions:

  1. Cut an expense or two.
  2. Get in touch with some old co-workers.
  3. Sign up for a newsletter.
  4. Start writing down your fallback plans.

And then, once you have things under control emotionally, when you have a plan and you know what you’re doing next, start thinking about how you can help others, and work with other people to improve things for everyone.



Tired of scrambling to get your job done?

If you were productive enough, you could take the afternoon off, confident you’d produced high value work. Not to mention having an easier time finding a new job when you need one.

Learn the secret skills of productive programmers.

15 May 03:12

Two Months of Sheltering in Place

by Richard

The weeks have gone by a lot faster these past 30 days than the first 30 days. Toronto is slowly loosening its restrictions, allowing curbside pickup from retail stores, though restaurants are still open only for delivery or pickup. So far none of the restaurants I get food from have closed permanently, though I don’t have a lot of hope for the diner across the street from me. I take a walk each day, weather permitting, go grocery shopping for snacks once a week, and as of today, I finally have masks that fit me. A family friend, Ruby, made them not just for our whole family, but for my sister’s in-laws as well. We call them our Ruby Masks. It was a heartwarming gesture, one I've seen friends on social media gush about as well when their friends make them masks. I don't know if we're all in this together, but a lot of people are stepping up.

My task list is just as long as it was before the pandemic. If I've said "I'm too busy to do [xyz]," well, that wasn't the reason. I watch more movies than usual, with Saturday having become movie night with co-workers. We stream it over one of the video chat services, and we make up for the low quality with camaraderie during the film. There’s a demand for a service like this, even in a post-pandemic world, or a world in which movie theatres still exist, because of the distributed nature of teams and friends. A browser plugin won’t cut it.

The weather is warming up, and even in a cold May, I’m out on the balcony in the sunlight. I went on my first bike ride of the pandemic to Trinity Bellwoods Park on a warm Saturday, which was followed by a week of cold weather. That all changed this week. I can finally drop my parka off at the dry cleaner.

I've taken two streetcar trips, both of them back from a computer repair place, having walked to get there. I took note of the time and ID number of the streetcars, but that was at least two weeks ago. The next streetcar I board, I'll be wearing a mask. I still keep track of every place I go using Foursquare's Swarm. Before the pandemic it was a game, with more points for visiting more places; now it has taken on the sombre task of logging where I was, in case I need to retrace my steps up to 14 days later.

All concerts I had tickets for to this point have been either cancelled or postponed. I expect the same of June concerts, and any concert happening this year. I know why sports leagues have not cancelled their seasons (they traffic in optimism), though it's all but certain no games will be shown in front of live crowds in North America in 2020.

It's easy to rationalize away the things I miss. "Restaurants weren't such a great experience anyway." "Concerts always had annoyances that I don't miss." "Museums are boring." But I'll go to them when they re-open, taking any precautions that are asked of me. I'm still optimistic that we will get to enjoy what we enjoyed before the pandemic, just in a different way.

15 May 03:12

Uber Eats reveals Canadians’ top comfort food amid COVID-19 pandemic

by Dennis Price

The COVID-19 pandemic has changed our lives drastically, but the one constant that hasn’t shifted is food cravings.

Canadians are continuing to crave specific foods during the quarantine, according to Uber Eats Canada.

The takeout and delivery service collected data on what food Canadians are ordering through its service.

These are the top ten most-ordered food items enjoyed by Canadians:

  1. Butter Chicken
  2. Poutine
  3. Miso Soup
  4. Pad Thai
  5. Naan
  6. Fries
  7. Bubble Tea
  8. Edamame
  9. Shawarma
  10. Onion Rings

Coming in at number two is, of course, poutine, which is very on-brand of Canadians.

Canadians’ top comfort food is based on data Uber Eats Canada gathered from March 16th to May 10th. The Uber Eats app is available on iOS and Android.

The post Uber Eats reveals Canadians’ top comfort food amid COVID-19 pandemic appeared first on MobileSyrup.

15 May 03:05

Twitter Favorites: [termie] Ran into another morning walker https://t.co/MPRk0Hq2kQ

termie @termie
Ran into another morning walker pic.twitter.com/MPRk0Hq2kQ
15 May 03:04

The Urban Forest of Vancouverism

by Gordon Price
mkalus shared this story from Price Tags.

This is what we planted in the 1990s: a landscape design from the post-Expo era that has come to be known as “Vancouverism.”

Downtown South was in a post-rezoning boom, and Hong Kong investment, families and sensibilities were arriving – evident on the 800-block of Hamilton Street where the major tower, completed in 1995, is named ‘Jardine’s Lookout’ (a mountain and residential area on Hong Kong Island).

Now, a generation later, it is surrounded by a maturing urban forest.

The 1991 Downtown South rezoning was accompanied by a neighbourhood-specific streetscape manual in 1994, meant to provide a greener, quieter identity on what would otherwise be traffic-heavy arterials.  Influenced by Erickson and Oberlander’s landscaping of Robson Square, the sidewalks would all have a double row of trees, with increased setbacks and, in this case, a heritage garden (all paid for by the developers, from building to curb.)

Note how there are four levels of landscaping, from bushes and hedges at grade, to the rows of trees, to the gardens on decks and roofs.  Foliage surrounds the pedestrian on every side, and above, proving that high-density urban environments can be greener and more lush than any grass-dominant suburb.

Regrettably, the curb-adjacent planting strips (inspired by West-End boulevards) could not handle the foot traffic along the metered streets, and so the grass has been replaced over time with asphalt, brick, concrete and astroturf.  Having been the councillor who pushed for grass curbing in the original urban design, I regret my over-optimism on its survival, but do wish we had gone for something both permeable and able to withstand the wear-and-tear.

This is an urban forest in its adolescence.  And it’s not the only block.  Throughout Downtown South, from Robson to Pacific, Granville to Yaletown, the streets are becoming so lush and thick with foliage, we’ll already have to consider how we’re going to thin them out.

 

15 May 03:04

I often wonder whether there’s anyone on earth who has placed their trust in Boris Johnson & not come to regret it. If there is, of course, it’s only a matter of time...

by mrjamesob
mkalus shared this story from mrjamesob on Twitter.

I often wonder whether there’s anyone on earth who has placed their trust in Boris Johnson & not come to regret it.
If there is, of course, it’s only a matter of time...



