Shared posts

28 Apr 00:22

Practicing the Coding Challenges

As part of my job hunt I’ve been doing some problems on LeetCode to prepare for coding challenges.

I don’t have a CS degree, but I have decades of experience — I know what a linked list is, for instance, and could write one by hand easily if called to. In a few different languages, even. I could talk about the trade-offs between a linked list and a contiguous array. Etc. I’ve got all that.

But these tests are kicking my butt a little bit. I think I’ve figured out why.

Consider a question like this:

You need to add two numbers and get a sum. But, importantly, the digits of those numbers are stored in arrays, and they’re backwards.

The return value also needs to be in a backwards array.

If inputs are [4,2,6] and [3,8,0,8], the function should return [7,0,7,8] — because 624 + 8083 == 8707.

My style of coding is to break problems into steps and make it super-obvious to other people — and future-me — what the code is doing. I like to write code so clear that comments aren’t needed.

I’d start with a top-level function something like this:

let num1 = number(from: array1)
let num2 = number(from: array2)
let sum = num1 + num2
return array(from: sum)

That’s clear, right? There are two functions referenced in the above code that are clearly transformers — one goes from an array to a number, and the other goes from a number to an array.

So the next steps are to fill those in, along with any additional helper functions.
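
Just to make that concrete, here's a rough Swift sketch of what those two transformers might look like (the names match the pseudocode above, and it assumes the rebuilt numbers fit in an Int):

// One possible shape for the two helpers. Assumes digits are stored
// least-significant first, e.g. [4,2,6] represents 624.
func number(from digits: [Int]) -> Int {
    var result = 0
    for digit in digits.reversed() {
        result = result * 10 + digit
    }
    return result
}

// 8707 -> [7,0,7,8], again least-significant digit first.
func array(from number: Int) -> [Int] {
    var digits: [Int] = []
    var remaining = number
    repeat {
        digits.append(remaining % 10)
        remaining /= 10
    } while remaining > 0
    return digits
}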

If I were on the other side of the table, and this is what the candidate did, I would be quite happy — because they’ve achieved not just correctness but clarity. They’ve solved the problem using a coding style that I’d want to see in production code.

But that’s not what these questions want to see at all.

What the Questions Actually Want

What they want — at least in the experience I’ve had so far — is for you to have some kind of insight into the problem that allows you to solve it in a more efficient way.

You may have already figured it out for this particular question, but just in case not, here's the tip: the answer should mirror the way we actually do sums on paper.

Remember that we go right-to-left, and we build up the answer digit-by-digit.

   624
+ 8083
------
  8707

The arrays are already backward, even! So just write a loop that does exactly what you do when doing this by hand (including the carry-the-one part). You create the answer — the [7,0,7,8] — as you go along.
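
For illustration, here's a rough Swift sketch of that loop (the function name is mine, not from the question; it walks both arrays in step, carrying as it goes, and handles inputs of different lengths):

// Add two numbers whose digits are stored least-significant first.
func addDigits(_ a: [Int], _ b: [Int]) -> [Int] {
    var result: [Int] = []
    var carry = 0
    var i = 0
    while i < a.count || i < b.count || carry > 0 {
        let digitA = i < a.count ? a[i] : 0
        let digitB = i < b.count ? b[i] : 0
        let sum = digitA + digitB + carry
        result.append(sum % 10)   // the digit you'd write down
        carry = sum / 10          // the one you'd carry
        i += 1
    }
    return result
}

// addDigits([4,2,6], [3,8,0,8]) == [7,0,7,8]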

I’m Not Sure What to Think About This

In production code, if a problem like this came up, I’d ask “How the hell did we get here?” and try to backtrack and figure out what insanity caused this, because it’s just not right.

But, if this code were truly needed, I’d write code the way I normally would, with clarity in mind first.

And then, if my tools told me it was too inefficient with time or space, I’d figure out a more efficient version.

These questions, then, are able to test what you might come up with when you’re in that position.

The thing is, what I would most want to know is how people write code for the 99% of time when they’re not in that position. That’s not being tested here.

28 Apr 00:22

Test Your Ideas First

by Richard Millington

Plenty of organisations right now are launching COVID-19 sections which fail to gain much traction.

This is a classic example of top-down planning. You’re creating a place for discussions that aren’t happening.

If members weren’t talking about the topic before you launched the section, they’re not likely to spend much time talking about a topic after you launch an area for it.

However, if you do still want to try it, test a few discussions first and see if they gain momentum.

If the discussions work, try a few webinars or a blog post and see how popular that proves to be.

If that works (and momentum is growing) then launch a dedicated area for the topic.

You can save yourself a lot of time by testing your ideas first.

28 Apr 00:22

Data science as a path to integrate computing into K-12 schools and achieve CS for All

by Mark Guzdial

My colleague Betsy DiSalvo is part of the team that just released Beats Empire, an educational game for assessing what students understand about middle school computer science and data science (https://info.beatsempire.org). The game was designed by researchers from Teachers College, Columbia University; Georgia Tech; University of Wisconsin, Madison; SRI International; Digital Promise; and Filament Games in concert with the NYC Dept. of Education. Beats Empire is totally free; it has already won game design awards, and it is currently in use by thousands of students. Jeremy Roschelle was a consultant on the game and he just wrote a CACM Blog post about the reasoning behind the game (see link here).

Beats Empire is an example of an important development in the effort to help more students get the opportunity to participate in computing education. Few students are taking CS classes, even when they’re offered — less than 5% in every state for which I’ve seen data (see blog post here). If we want students to see and use computing, we’ll need to put them in other classes. Data science fits in well with other classes, especially social studies classes. Bootstrap: Data Science (see link here) is another example of a computing-rich data science curriculum that could fit into a social studies class.

Social studies is where we can reach the more diverse student populations who are not in our CS classes. I’ve written here about my work developing data visualization tools for history classes. For a recent NSF proposal, I looked up the exam participation in the two Advanced Placement exams in computer science (CS Principles and CS A) vs the two AP exams in history (US history and World history). AP CS Principles was 32% female, and AP CS A was 24% female in 2019. In contrast, AP US History was 55% female and AP World History was 56% female. Five times as many Black students took the AP US History exam as took the AP CS Principles exam. Fourteen times as many Hispanic students took the AP US History exam as took the AP CS Principles exam.

Data science may be key to providing CS for All in schools.

28 Apr 00:21

Running the bava’s DNS through Cloudflare

by Reverend

So, one of the themes of 2020 for me (even before the virus) has been experimenting with the hosting of my blog. Back in January I moved my blog away from shared hosting on Reclaim to its own server on Digital Ocean (there were insinuations I was slowing down our infrastructure 🙂 ). That was fun and educational. In March I used the excuse of exploring the container-based WordPress hosting service Kinsta to move this blog away from Digital Ocean and onto that service. My site was definitely faster, but at close to $100 per month for my traffic tier it was more than double the cost of my own droplet, so back to Digital Ocean I went before my 30-day free trial was up.

A couple of things lingered from that experiment: I started using more robust caching on my server, and I wanted to explore further the possibilities of Content Delivery Networks (a.k.a. CDNs). So, trying to keep the always rewarding discovery time alive, I started playing with Cloudflare this weekend. How to explain what Cloudflare is? Well, it is similar to transactional email services like Mailgun or Sparkpost, but for DNS. If you have a cPanel account you can run your email, manage your DNS, and store your files all in the same place; it’s an all-in-one hosting solution. But as you start creating apps outside that environment, you have to manage not only the cloud-based hosting service you install your app on (such as a Digital Ocean droplet, an AWS EC2 instance, or a Linode node), but also the DNS for the account as well as the email. So, you need various services to run that VPS, and many of them have APIs that allow the services to integrate behind the scenes. Cloudflare is a service that is by and large dedicated to managing your DNS in the cloud, and by extension it has become one of the biggest CDNs on the planet.

Managing DNS entries in cPanel

Traditionally I have used cPanel’s Zone Editor to manage DNS (pictured above) for all my domains, but I figured it was high time I start exploring Cloudflare for myself given we’ve had more than a few questions about the service from folks using Reclaim.

So, what does this look like? Well, the first thing is you can sign up for a free account, which will basically give you access to managing DNS for free, DDoS protection, and some basic CDN options. You can see the breakdown of their plans here, but the long and the short of it is that if you want to use the more extensive CDN options, more caching options, access to load balancing and failover, etc., you’re going to have to start paying anywhere from $20-$200 per month, ranging from a personal to a business level of options. But first things first: you need to sign up for an account and then point your nameservers to Cloudflare for the domain you are hosting through them. They have a pretty solid tool that discovers all the DNS records for that domain, but keep in mind I have found it does not find subdomains, so those need to be re-created manually. This blog is on its own VPS (157.230.99.19), but all the subdomains and other services (save email, which is through Mailgun) still point to my cPanel account where this domain used to be managed and where the subdomains still live, namely 165.227.229.15. So, DNS is essentially routing the domain names to various IP addresses, and that can be in the form of A records, CNAMEs, MX records, etc. So that is what the free version of Cloudflare offers, and if you need to route domain names and do not have DNS built into the application on your server, this service is a quicker, cheaper, and easier solution than trying to figure out DNS at the server level on your own.

While the free option will work for most, I wanted to start drilling down into more options through the CDN. In particular, I wanted to see if they can compress my images and make the site run faster from wherever it’s loaded. The bava site is pretty inefficient to begin with, given I never optimize images and I have way too many scripts running, so services like Cloudflare could help speed up a site’s loading time for lazy people like me (a result of the years of online kipple that accumulate over time). The other thing I appreciate about Cloudflare is the insight into how much data is being served, what is cached, and how much of the traffic the site is getting is actually real. The numbers here are once again inflated, as they were with Kinsta, but the ratio of requests to unique visitors seems more in line—and I am not paying per request or visit (or so I hope :). The other thing that is kind of nice is that they give you a shared SSL cert as part of the service, so that is taken care of by Cloudflare, and you can upgrade to a dedicated certificate or upload your own, but that is not part of the $20 monthly Pro plan I have signed up for.

But it is definitely the faster site load time that made me interested in Cloudflare, and according to the Speed tab, it’s working:

So, if I am reading this right, what was usually taking anywhere from 3 seconds to a minute to load before Cloudflare is now taking anywhere from a second to 3 seconds for the user, which is definitely a good thing. One of the options you get in the Pro plan is Polish, which speeds up image loading, and that is something I enabled, so we’ll see if that makes a sustained difference.

The other key element is caching, and the options seem pretty basic and simple, which I like, and I have to see if this caching interferes with caching already in place on WordPress through plugins. 

So, all this is a way to document where I am with Cloudflare thus far; there are a few things I still want to explore, though. They enable you to set up load balancing and failover between two servers. This is absolute overkill for my blog, but acknowledging that, I still want to play with it because I love the idea of the site going down and traffic instantly being directed to a stand-by bava, so no viewer is ever at risk of not being able to read my drivel 🙂 That is in the traffic section, so more to discover through a service they call Argo. The other thing I would be interested in exploring is Streaming; I wonder what that would look like and how it would work, so that might be the topic of another post.

28 Apr 00:21

The Ghost of bava

by Reverend

This is kind of a record-keeping post; it turns out that when you’ve been blogging for nearly 15 years, posts can be useful to remind you of what you did years earlier that you presently have no recollection of. It’s my small battle against the ever-creeping memory loss that follows on the heels of balding and additional chins—blog against the dying of the light!

Anyway, I’m trying to keep on top of my various sites, and recently I realized that as a result of extracting this blog out of my long-standing WordPress Multisite in 2018, followed by a recent move over to Digital Ocean this January, a number of images in posts that were syndicated from bavatuesdays to sites like https://jimgroom.umwblogs.org were breaking. The work of keeping link rot at bay is daunting, but we have the technology. I was able to log in to the UMW Blogs database and run the following SQL query:

UPDATE wp_13_posts SET post_content = replace(post_content, 'http://bavatuesdays.com/files/', 'https://bavatuesdays.com/wp-content/uploads/')

That brought those images back, and it reminds me that I may need to do something similar for the ds106.us site given I have a few hundred posts syndicated into that site that probably have broken images now. 

But the other site I discovered had broken images as a result of my various moves was the Ghost instance I’ve kept around since 2014. I initially started this site as a sandbox on AWS in order to get a Bitnami image of Ghost running, which was my first time playing with that space in earnest back in 2014. That period was when Tim and I were trying to convince UMW’s IT department to explore AWS in earnest. In fact, we would soon move UMW Blogs to AWS as a proof-of-concept but also to try and pave the way for hosting more through Cloud-based services like Digital Ocean, etc. 

It’s also the time when the idea of servers in the “Cloud” seemed amazing and the idea of new applications running on stacks other than LAMP became real for me. Ghost was one of those. It was the promise of a brave new world, a next-generation sandbox, which was around the time Tim set up container-based hosting for both Ghost and Discourse through Reclaim Hosting as a bit of an experiment. Both worked quite well and were extremely reliable, but there was not much demand, and in terms of support it continued to rely too heavily on Tim for us to sustain it without a more robust container-based infrastructure. We discontinued both services a while back, and are finally shutting down those servers once and for all. And while we had hopes for Cloudron over the last several years, in the end that’s not a direction we’re planning on pursuing. Folks have many options for hosting applications like JupyterHub and the like, and the cost concerns of container-based hosting remain a big question mark—something I learned quickly when using Kinsta.

Part of what makes Reclaim so attractive is we can provide excellent support in tandem with an extremely affordable service. It’s a delicate balance to say the least, but we’ve remained lean, investment free, and as a result have been able to manage it adroitly. We are still convinced that for most folks a $30 per year hosting plan with a free domain will go a long way towards getting them much of what they need when it comes to a web presence. If we were to double or triple that cost by moving to a container-based infrastructure it would remove us from our core mission: provide affordable spaces for folks to explore and learn about the web.* What’s more, in light of the current uncertainties we all face we’re even more committed to keeping costs low and support dialed-in. 

Ghost in a Shell

So, I’m not sure why this record-keeping post became a manifesto on affordability, but there you have it 🙂 All this to say that while we have been removing our Discourse forum application servers, we also decided to use the occasion to migrate the Ghost instances we’re currently hosting (of which there are only a very few) to shared hosting so that we can retire the my.reclaim.domains server that was running them on top of Cloudron. So, Tim and I spent a morning last week going over his guide for setting up Ghost through our shared hosting on cPanel, and it still works.† The only change is that you now need to use Node.js version 10+ for the latest version of Ghost.

He migrated his Ghost blog to our shared hosting, and I did the same for mine (which only has a few posts). He has been blogging on Ghost for several years now and I have to say I like the software a lot. It’s clean and quite elegant, and their mission and transparency is a model! But if you don’t have the expertise to install it yourself (whether on cPanel or a VPS), hosting it through them comes at a bit of a cost, with plans starting at about $30 per month. That price point is a non-starter for most folks starting out. What’s more, there’s little to no room to dig deeper into the various elements of web hosting that cPanel affords for an entire year (including a domain) at the same cost as just one month of hosting for only one site.

So, I have toyed with the idea of trying to move all my posts over to Ghost, but when I consider the cost, as well as the fact that it has no native way to deal with commenting cleanly, it quickly becomes a non-starter. With over 14,000 comments on this blog, I can’t imagine they would be migrated to anything resembling a clean solution that would not result in just that much more link rot. I guess I am still WordPress #4life 🙂


*And while it remains something we are keenly interested in doing, we are not seeing it as an immediate path given the trade-off between investment costs and the idea of per-container costs for certain applications, which would radically change our pricing model.

†He had to help me figure out some issues I ran into as a result of running the commands as root.

28 Apr 00:21

We should be angry for our communities, not at them

Gerry Robinson, Schools Week, Apr 27, 2020

This is a sentiment I've heard a lot recently: "we don’t want it to go back to normal. Normal wasn’t working." And I'm sympathetic. The question is, how do we get from here to there? The message in one deprived school district is this: "'we’re all in this together' and 'everyone needs to play their part'. How desperately insulting to suggest this has ever been the case. Our children have long lived with the pre-existing condition of staggering deprivation." Our focus, both in relief and in the reconstruction that follows, needs to be not at the top but at the bottom - making sure those most in need have housing, food, health care, education, clothing, connectivity, a voice.... That in itself is a sufficient challenge for the economy and morality of our times.

28 Apr 00:20

Frozen City: Idling Times

by Gordon Price

A two-minute layover on the No. 7 Nanaimo.  Waiting for time to catch up with the schedule.

The buses have never been so fast and on time, and never so empty.

28 Apr 00:20

Coronavirus Genome 2

mkalus shared this story from xkcd.com.

[moments later, checking phone] Okay, I agree my posting it was weird, but it's somehow even more unnerving that you immediately liked the post.
28 Apr 00:20

Time to Talk Workloads

Alex Usher, Higher Education Strategy Associates, Apr 27, 2020

As is so often the case (and is so often the great failure of economists), "simple math" fails to tell the tale. And so also here. Alex Usher writes, "smaller classes mean more instructors. That could mean hiring boat-loads of graduate students and sessionals. But who has money for that during the crisis?  That leaves the existing professoriate." And so, he concludes, "until the end of the emergency, most non-COVID-related research (broadly defined) should be on hold...  Stop the tenure clocks, freeze promotion and merit increases (there will be no money for either, anyway). But change up the workloads." This may be what simple math says. But I can't imagine it being reality.

28 Apr 00:15

Key Skill: Self-Directed Learning

Silvia Tolisano, LangWitches, Apr 27, 2020

If anything has become clear in recent weeks, it has been this: "Remote learning relies on students to being self-directed learners... We need to start planting the seeds early for a more self-directed learning vs. showing up for class and waiting-to-see-what-a-teacher-has-planned for us today!" This includes developing skills to create "documentation directed to aid self-awareness, fuel motivation to learn, and support decision-making concerning what wants or needs to be learned or can be learned next." Additionally, there's curation, "a vital skill that directly connects with self-directed learning, but it also develops and strengthens the now skills and literacies in the process." Tolisano also includes things like choice and voice, web literacy, and personal learning networks.

28 Apr 00:13

Domestic telepresence at scale: some notes

After I posted about video calls, doing stuff together, and the TV room, there was a great discussion on Twitter. Listen to some of these ideas:

  • On remote togetherness without having to have anything to say, Tom Critchlow speculates: perhaps an e-ink dithered image that is an always-on video feed of my living room? Shared with family and friends?
  • And later in the same conversation: Another way to think about it – always on HD video of the living room shared with family/friends but only at specific times like every day 2-5pm or something.

…which I’m into as an idea! I like the idea of a scheduled group call that fires up on your TV at 2pm for an hour whether you’re there or not. Also I think there’s a lot to be said for ambient noise – it’d be neat to hear the muffled sound of activity on the other side of the TV screen.

There was a good discussion about how you could do a “keep the doors open” kind of telepresence, but without massive privacy violations. Even a dithered e-ink screen would today require an internet-connected, always-on camera pointing directly into your house. Scary.

  • Scott Jenson talked about how that could be done: What about local only analysis of an HD signal? The video never leaves the camera, but it does output who it recognizes and that gets shared with only the immediate family. – and I like this idea of compressed “computer vision dithering” that shares only one of those green boxes and labels diagrams rather than the full feed.
  • How could you guarantee and certify that, asks Han Gerwitz. Maybe we need some sort of visibly low-bandwidth networking. Sensing cameras that output only a QR code display, with scanning cameras collecting from them.

I really like the idea of computer networking protocols that are human readable. I have a vague memory of reading about an audible protocol that sounded like birdsong? The advantage being that if a hacker was trying to get into your network, you would hear it.

On the other end of things:

  • Andrew Eland pointed me at Tonari which makes massive room-scale screens that show other, remote rooms, with the people projected there at 1:1 scale: tonari is an open doorway to another place. (Here’s Andrew’s tweet.)

Tonari is for businesses, it’s not made for the domestic context. And I like the idea that sometimes you’re having a meeting, but sometimes you fall into an opportunistic water-cooler chat, and the rest of the time your colleagues are just there, the ebbs and flows of the two offices brought together, however far apart.

I mean…

Some of these ideas are pretty weird. BUT. But. Why not give them a go? The feeling of it all. It would be good.


I’ve been trying to put a name to what it is I’m circling, and the best I can come up with is this: domestic telepresence at scale.

By telepresence there’s the usual meaning, being telepresence robots (which come and go in the zeitgeist, that post is from 2011). But ALSO AND MORE REALISTICALLY I would include group video calls. The robots being something more like “peak” telepresence?

Actually, I would include anything that contributes to this feeling of togetherness… fictional ideas like the idea above of transmitting the ambient noise of the home so I feel like a family member is in an adjacent room, and also real ones like “availability” indicators, and WhatsApp read receipts, and so on.

What I’ve learnt from my experiences with casual Zooms with friends, or hanging out in social Slacks, or doing workouts with my family in Australia with a combination of YouTube Live and group FaceTimes is that these technologies all combine to produce a sense of togetherness, they all count!

Telepresence isn’t something you step into, rather it’s a gradient. Your attention can be fully or only partially split between the local and the remote. And you’re in multiple groups of course, each of which has different compositions and norms. So you need all these different approaches at different levels for peak telepresence to even have a chance to occur.

Then there’s that word domestic.

I can’t even imagine what this lockdown would have been like without the internet. I’m with my family and friends, even though we’re physically sometimes thousands of miles apart. And although I’m giving the internet credit, I will also say that the internet has not really served us that well. The fact that we have this sense of togetherness right now, domestically, is because there is an amazing mass co-opting of technology going on:

Zoom was not made for people to play bridge! FaceTime wasn’t created to be left propped up in a corner while we sing “twinkle twinkle little star” and then wander off to make a cup of tea while people come and go from the room. But imagine if these technologies had been built for these behaviours!

Look. People in business will do all kinds of bonkers things if there is Return On Investment.

But “domestic” means having TVs which are shared devices, and phones which are private. It means group accounts. It means old people and young people. It means different rooms in the house. The domestic world is more diverse, more messy, and more demanding. Software isn’t built for this.

What’s particularly energising about this period of forced experiments, as Benedict Evans calls it, or less abstractly, “not being allowed to see our friends”, is that I’ve been reminded that telepresence is powerful and people want it when it works and there is an absolute TON for us still to explore.

At scale.

The reason I include “scale” is because I want to figure out how to continue to have this sense of togetherness available for everyone… even after we’re allowed to leave our homes.

When the lockdown ends, a lot of this co-opting of technology will end too. That’s a shame.


Anyway, in all of this domestic telepresence I would include:

  • door-sized big screens that are telepresence-teleporters into other homes
  • e-ink portals and low-bandwidth sensors, maybe even our old Availabot presence toy (2006!)
  • tabletop robots that can be inhabited by my family members: Ross Atkin’s AMAZING tabletop telepresence device is a DIY robot made from a cardboard kit and a phone, and you drive it around your friend’s kitchen a hundred miles away with your face talking out of it.

In what world could Good Night Lamps be just at home on my shelves as my books are?

Any one of these on their own would be weird. But together, and normalised somehow…

I think what the “at scale” requirement makes me ask is: how could these be mainstreamed, and how could people almost compose their own experiences through all kinds of different services and devices, without having to do things like create a new social network in a thousand different places?

I don’t see that the answer could ever be a single service doing it “right”: there can’t be a single “winner” such as Zoom, or Facebook, or WeChat, no matter how many features they throw in. People and groups are too different.

So I think of iPhones and Amazon Echoes.

iPhones and Echoes both pass Google’s toothbrush test: Is it something you will use once or twice a day, and does it make your life better?

But nobody’s iPhone and nobody’s Echo is the same. We call it “downloading apps” but really what’s happening is that people are making their devices radically different – a camera for one person, a TV for the next. They’re twice-a-day toothbrushes, sure, but it’s a different toothbrush for everyone.

Maybe there’s a lesson here?


I guess what I’m speculating is a kind of social operating system that links all these different parts, and allows new ones to emerge.

Something that doesn’t abstract Zoom, Facebook, and Animal Crossing, but sits alongside them and somehow provides visibility between them (or not, as appropriate).

What would it mean to see that your friends are congregating watching Netflix (remotely) while you’re still at work? How could they hassle and yell at you to come hang out already?

A digital photo frame that understands it’s in the shared front room and somehow shares only the pictures appropriate to that context from everyone in the house?

What about something that lets me and my group step up a gear from iMessage not into FaceTime, but into toy telepresence robots, playing an obstacle course that somebody has cobbled together in their front room?

Feeling my way around something here. Not sure what it is yet.

28 Apr 00:12

Some of Canada’s transit systems crushed at getting riders. And that was their weakness in the pandemic

by Frances Bula

Some of you might not have seen my latest transit story because it ran in the Alberta pages of the Globe. But it’s relevant across the country.

I looked at the difference between Edmonton and Calgary and, it turned out, that explained some of the differences in other parts of the country. It explained why Vancouver’s TransLink, one of the most successful transit operations in the country, was the first to have to make massive layoffs. Toronto, even more successful, was second. Winnipeg, which has healthy ridership too, also had to lay off, and Calgary is looking at it if things go on much longer.

Why? The answer is in my story, here and text pasted after the turn.

SPECIAL TO THE GLOBE AND MAIL

Until the world turned upside down a couple of months ago, having a transit system that was heavily sustained by riders – rather than taxpayers – was considered a public-policy victory.

Robust fare revenues in Calgary, Vancouver and Toronto meant cities spent less proportionally topping up the systems with property-tax money than places such as Edmonton, where ridership was lower.

But those cities are victims of their own success as ridership plummets during the pandemic, leaving gaping holes in their operating budgets. Civic officials have responded by making painful service cuts and mayors are pleading for federal help.

According to information from the cities, more than half of transit in Calgary, Vancouver and Toronto was financed through fares in 2019: 55 per cent, 57 per cent and 69 per cent respectively. Fares made up 40 per cent of Edmonton’s transit revenue in that year.

“How much operating pressure they’re under now is because of how successful they were in the past,” says consultant Tamim Raad, a former planning director of B.C.’s transit agency, TransLink.

Without federal relief for transit – something all Canadian cities joined together to ask for officially this week – Calgary is expected to follow in the footsteps of Vancouver and Toronto, which announced massive transit layoffs this week.

Calgary Mayor Naheed Nenshi said the city can’t support the transit system indefinitely at the current rate, even though it has already reduced service by about 15 per cent.

“It’s not sustainable. We cannot go on like this forever. We will have to look at further cuts,” the mayor said during his weekly update on Wednesday for the news media.

Calgary’s predicament is in contrast to Edmonton, a smaller system that gets more of its support from property taxes and less from fares.

So Edmonton’s mayor described a far less dire situation this week than Mr. Nenshi.

In that city’s weekly update on Thursday, Mayor Don Iveson gave no indication of any more trims than the night-service cuts that went into effect on Monday.

“We continue to consider transit an essential service.”

Calgary made a strategic decision years ago to spend its limited transit dollars on more coverage and not running any of its C-Train lines underground. The system is considered a North American transit success story with its 106 million-trips-a-year ridership in a city of 1.3-million. About 45 per cent of people who travel to downtown Calgary use the light-rail.

Edmonton chose early to spend a lot of money putting its light rail underground downtown, which left less money for more kilometres of lines elsewhere. Its ridership, pre-pandemic, was 87 million trips a year in a city of nearly a million.

Calgary had budgeted to get $170-million in transit-fare revenue for 2020. It is now losing $10-million to $12-million a month, although the system is still carrying 100,000 people a day.

Edmonton had forecast $131-million, and has projected it is likely to lose $28-million from the point in March when revenues started going down until mid-June.

Edmonton Transit Service did cause a flurry of concern earlier this week, when it cut late-night service, ending light-rail trains at 10 p.m. and buses at midnight.

The move has prompted alarm because of its impact on people working late shifts, many of them essential workers.

“We are getting a lot of calls from members on the front line, doing home care, in acute care, also group homes and corrections,” said Karen Weiers, vice-president at the Alberta Union of Public Employees. “We recognize the strain on the municipalities, but it looks like Edmonton is trying to solve one problem by creating a whole lot of others.”

By Wednesday, the city had received 300 calls to its help line, 85 per cent of them from health care workers. Edmonton’s interim city manager, Adam Laughlin, said people are being provided with taxi chits until the city can work out another way to get them home at night.

But Ms. Weiers said the city also appears to be suggesting that employers will provide a solution, which they have not done so far.

In both cities, people are worried about the recent cuts or ones that might be to come.

In Calgary, Mike Mahar of the Amalgamated Transit Union said he is waiting for the bad news for his union members.

“I talked to the employer this afternoon. It’s as grim as Toronto or Vancouver,” he said on Thursday night.

His biggest concern, if Calgary also announces major layoffs, is the crowding that will happen as those still trying to get to work pile in to what’s left.

In Vancouver, drivers report that squabbles sometimes break out at bus stops as people fight over who will get the few seats. Some simply get on no matter what, ignoring the signs on seats about distancing.

TransLink, the agency that manages transit and transportation for B.C.’s Lower Mainland, announced it would lay off 1,500 employees on Monday, after warning that it was losing $75-million a month in revenue.

It was the first to “cross the red line,” as one bureaucrat put it, because it operates independently of cities. So it doesn’t have big-city reserves to rely on.

But even the Toronto Transit Commission, which is part of the municipal budget, announced 1,200 layoffs on Thursday.

And the Federation of Canadian Municipalities made a special point of including targeted help for transit in cities in recommendations to the federal government on Thursday.

The FCM said Canadian city transit systems are losing $400-million a month during the crisis.

The federal government has been much slower to respond than the U.S. government, which announced US$25-billion for transit relief on April 2.

But the Canadian federal government has never been involved in helping with operating costs of transit the way the U.S. one has, said Marco D’Angelo, CEO of the Canadian Urban Transit Association.

“It’s new to them, the idea of embracing operations, even for a short period.”

One piece of the transit system that remains untouched is federal contributions to capital investments.

That’s why Vancouver and Calgary are still planning to go ahead with big new rapid-transit projects. Some smaller projects in Calgary and Edmonton, like the latter city’s bus-network redesign, are being put on hold.


28 Apr 00:10

Hugo vs Jekyll: an epic battle of static site generator themes

by hello@victoria.dev (Victoria Drake)

I recently took on the task of creating a documentation site theme for two projects. Both projects needed the same basic features, but one uses Jekyll while the other uses Hugo.

In typical developer rationality, there was clearly only one option. I decided to create the same theme in both frameworks, and to give you, dear reader, a side-by-side comparison.

This post isn’t a comprehensive theme-building guide, but it’s intended to familiarize you with the process of building a theme in either generator.

Here’s a crappy wireframe of the theme I’m going to create.

A sketch of the finished page

If you’re planning to build along, it may be helpful to serve the theme locally as you build it; both generators offer this functionality. For Jekyll, run jekyll serve, and for Hugo, hugo serve.

There are two main elements: the main content area, and the all-important sidebar menu. To create them, you’ll need template files that tell the site generator how to generate the HTML page. To organize theme template files in a sensible way, you first need to know what directory structure the site generator expects.

How theme files are organized

Jekyll supports gem-based themes, which users can install like any other Ruby gem. This method hides theme files in the gem, so for the purposes of this comparison, we aren’t using gem-based themes.

When you run jekyll new-theme <name>, Jekyll will scaffold a new theme for you. Here’s what those files look like:

.
├── assets
├── Gemfile
├── _includes
├── _layouts
│   ├── default.html
│   ├── page.html
│   └── post.html
├── LICENSE.txt
├── README.md
├── _sass
└── <name>.gemspec

The directory names are appropriately descriptive. The _includes directory is for small bits of code that you reuse in different places, in much the same way you’d put butter on everything. (Just me?) The _layouts directory contains templates for different types of pages on your site. The _sass folder is for Sass files used to build your site’s stylesheet.

You can scaffold a new Hugo theme by running hugo new theme <name>. It has these files:

.
├── archetypes
│   └── default.md
├── layouts
│   ├── 404.html
│   ├── _default
│   │   ├── baseof.html
│   │   ├── list.html
│   │   └── single.html
│   ├── index.html
│   └── partials
│       ├── footer.html
│       ├── header.html
│       └── head.html
├── LICENSE
├── static
│   ├── css
│   └── js
└── theme.toml

You can see some similarities. Hugo’s page template files are tucked into layouts/. Note that the _default page type has files for a list.html and a single.html. Unlike Jekyll, Hugo uses these specific file names to distinguish between list pages (like a page with links to all your blog posts on it) and single pages (like one of your blog posts). The layouts/partials/ directory contains the buttery reusable bits, and stylesheet files have a spot picked out in static/css/.

These directory structures aren’t set in stone, as both site generators allow some measure of customization. For example, Jekyll lets you define collections, and Hugo makes use of page bundles. These features let you organize your content multiple ways, but for now, let’s look at where to put some simple pages.

Where to put content

To create a site menu that looks like this:

Introduction
    Getting Started
    Configuration
    Deploying
Advanced Usage
    All Configuration Settings
    Customizing
    Help and Support

You’ll need two sections (“Introduction” and “Advanced Usage”) containing their respective subsections.

Jekyll isn’t strict with its content location. It expects pages in the root of your site, and will build whatever’s there. Here’s how you might organize these pages in your Jekyll site root:

.
├── 404.html
├── assets
├── Gemfile
├── _includes
├── index.markdown
├── intro
│   ├── config.md
│   ├── deploy.md
│   ├── index.md
│   └── quickstart.md
├── _layouts
│   ├── default.html
│   ├── page.html
│   └── post.html
├── LICENSE.txt
├── README.md
├── _sass
├── <name>.gemspec
└── usage
    ├── customizing.md
    ├── index.md
    ├── settings.md
    └── support.md

You can change the location of the site source in your Jekyll configuration.

In Hugo, all rendered content is expected in the content/ folder. This prevents Hugo from trying to render pages you don’t want, such as 404.html, as site content. Here’s how you might organize your content/ directory in Hugo:

.
├── _index.md
├── intro
│   ├── config.md
│   ├── deploy.md
│   ├── _index.md
│   └── quickstart.md
└── usage
    ├── customizing.md
    ├── _index.md
    ├── settings.md
    └── support.md

To Hugo, _index.md and index.md mean different things. It can be helpful to know what kind of Page Bundle you want for each section: Leaf, which has no children, or Branch.

Now that you have some idea of where to put things, let’s look at how to build a page template.

How templating works

Jekyll page templates are built with the Liquid templating language. It uses braces to output variable content to a page, such as the page’s title: {{ page.title }}.

Hugo’s templates also use braces, but they’re built with Go Templates. The syntax is similar, but different: {{ .Title }}.

Both Liquid and Go Templates can handle logic. Liquid uses tag syntax to denote logic operations:

{% if user %}
  Hello {{ user.name }}!
{% endif %}

And Go Templates places its functions and arguments in its braces syntax:

{{ if .User }}
    Hello {{ .User }}!
{{ end }}

Templating languages allow you to build one skeleton HTML page, then tell the site generator to put variable content in areas you define. Let’s compare two possible default page templates for Jekyll and Hugo.

Jekyll’s scaffold default theme is bare, so we’ll look at their starter theme Minima. Here’s _layouts/default.html in Jekyll (Liquid):

<!DOCTYPE html>
<html lang="{{ page.lang | default: site.lang | default: "en" }}">

  {%- include head.html -%}

  <body>

    {%- include header.html -%}

    <main class="page-content" aria-label="Content">
      <div class="wrapper">
        {{ content }}
      </div>
    </main>

    {%- include footer.html -%}

  </body>

</html>

Here’s Hugo’s scaffold theme layouts/_default/baseof.html (Go Templates):

<!DOCTYPE html>
<html>
    {{- partial "head.html" . -}}
    <body>
        {{- partial "header.html" . -}}
        <div id="content">
        {{- block "main" . }}{{- end }}
        </div>
        {{- partial "footer.html" . -}}
    </body>
</html>

Different syntax, same idea. Both templates pull in reusable bits for head.html, header.html, and footer.html. These show up on a lot of pages, so it makes sense not to have to repeat yourself. Both templates also have a spot for the main content, though the Jekyll template uses a variable ({{ content }}) while Hugo uses a block ({{- block "main" . }}{{- end }}). Blocks are just another way Hugo lets you define reusable bits.

Now that you know how templating works, you can build the sidebar menu for the theme.

Creating a top-level menu with the pages object

You can programmatically create a top-level menu from your pages. It will look like this:

Introduction
Advanced Usage

Let’s start with Jekyll. You can display links to site pages in your Liquid template by iterating through the site.pages object that Jekyll provides and building a list:

<ul>
    {% for page in site.pages %}
    <li><a href="{{ page.url | absolute_url }}">{{ page.title }}</a></li>
    {% endfor %}
</ul>

This returns all of the site’s pages, including all the ones that you might not want, like 404.html. You can filter for the pages you actually want with a couple more tags, such as conditionally including pages if they have a section: true parameter set:

<ul>
    {% for page in site.pages %}
    {%- if page.section -%}
    <li><a href="{{ page.url | absolute_url }}">{{ page.title }}</a></li>
    {%- endif -%}
    {% endfor %}
</ul>

You can achieve the same effect with slightly less code in Hugo. Loop through Hugo’s .Pages object using Go Template’s range action:

<ul>
{{ range .Pages }}
    <li>
        <a href="{{.Permalink}}">{{.Title}}</a>
    </li>
{{ end }}
</ul>

This template uses the .Pages object to return all the top-level pages in content/ of your Hugo site. Since Hugo uses a specific folder for the site content you want rendered, there’s no additional filtering necessary to build a simple menu of site pages.

Creating a menu with nested links from a data list

Both site generators can use a separately defined data list of links to render a menu in your template. This is more suitable for creating nested links, like this:

Introduction
    Getting Started
    Configuration
    Deploying
Advanced Usage
    All Configuration Settings
    Customizing
    Help and Support

Jekyll supports data files in a few formats, including YAML. Here’s the definition for the menu above in _data/menu.yml:

section:
  - page: Introduction
    url: /intro
    subsection:
      - page: Getting Started
        url: /intro/quickstart
      - page: Configuration
        url: /intro/config
      - page: Deploying
        url: /intro/deploy
  - page: Advanced Usage
    url: /usage
    subsection:
      - page: Customizing
        url: /usage/customizing
      - page: All Configuration Settings
        url: /usage/settings
      - page: Help and Support
        url: /usage/support

Here’s how to render the data in the sidebar template:

{% for a in site.data.menu.section %}
<a href="{{ a.url }}">{{ a.page }}</a>
<ul>
    {% for b in a.subsection %}
    <li><a href="{{ b.url }}">{{ b.page }}</a></li>
    {% endfor %}
</ul>
{% endfor %}

This method allows you to build a custom menu, two nesting levels deep. The nesting levels are limited by the for loops in the template. For a recursive version that handles further levels of nesting, see Nested tree navigation with recursion.

Hugo does something similar with its menu templates. You can define menu links in your Hugo site config, and even add useful properties that Hugo understands, like weighting. Here’s a definition of the menu above in config.yaml:

sectionPagesMenu: main

menu:  
  main:
    - identifier: intro
      name: Introduction
      url: /intro/
      weight: 1
    - name: Getting Started
      parent: intro
      url: /intro/quickstart/
      weight: 1
    - name: Configuration
      parent: intro
      url: /intro/config/
      weight: 2
    - name: Deploying
      parent: intro
      url: /intro/deploy/
      weight: 3
    - identifier: usage
      name: Advanced Usage
      url: /usage/
    - name: Customizing
      parent: usage
      url: /usage/customizing/
      weight: 2
    - name: All Configuration Settings
      parent: usage
      url: /usage/settings/
      weight: 1
    - name: Help and Support
      parent: usage
      url: /usage/support/
      weight: 3

Hugo uses the identifier, which must match the section name, along with the parent variable to handle nesting. Here’s how to render the menu in the sidebar template:

<ul>
    {{ range .Site.Menus.main }}
    {{ if .HasChildren }}
    <li>
        <a href="{{ .URL }}">{{ .Name }}</a>
    </li>
    <ul class="sub-menu">
        {{ range .Children }}
        <li>
            <a href="{{ .URL }}">{{ .Name }}</a>
        </li>
        {{ end }}
    </ul>
    {{ else }}
    <li>
        <a href="{{ .URL }}">{{ .Name }}</a>
    </li>
    {{ end }}
    {{ end }}
</ul>

The range function iterates over the menu data, and Hugo’s .Children variable handles nested pages for you.

Putting the template together

With your menu in your reusable sidebar bit (_includes/sidebar.html for Jekyll and partials/sidebar.html for Hugo), you can add it to the default.html template.

In Jekyll:

<!DOCTYPE html>
<html lang="{{ page.lang | default: site.lang | default: "en" }}">

{%- include head.html -%}

<body>
    {%- include sidebar.html -%}

    {%- include header.html -%}

    <div id="content" class="page-content" aria-label="Content">
        {{ content }}
    </div>

    {%- include footer.html -%}

</body>

</html>

In Hugo:

<!DOCTYPE html>
<html>
{{- partial "head.html" . -}}

<body>
    {{- partial "sidebar.html" . -}}

    {{- partial "header.html" . -}}
    <div id="content" class="page-content" aria-label="Content">
        {{- block "main" . }}{{- end }}
    </div>
    {{- partial "footer.html" . -}}
</body>

</html>

When the site is generated, each page will contain all the code from your sidebar.html.

Create a stylesheet

Both site generators accept Sass for creating CSS stylesheets. Jekyll has Sass processing built in, and Hugo uses Hugo Pipes. Both options have some quirks.

Sass and CSS in Jekyll

To process a Sass file in Jekyll, create your style definitions in the _sass directory. For example, in a file at _sass/style-definitions.scss:

$background-color: #eef !default;
$text-color: #111 !default;

body {
  background-color: $background-color;
  color: $text-color;
}

Jekyll won’t generate this file directly, as it only processes files with front matter. To create the end-result filepath for your site’s stylesheet, use a placeholder with empty front matter where you want the .css file to appear. For example, assets/css/style.scss. In this file, simply import your styles:

---
---

@import "style-definitions";

This rather hackish configuration has an upside: you can use Liquid template tags and variables in your placeholder file. This is a nice way to allow users to set variables from the site _config.yml, for example.

The resulting CSS stylesheet in your generated site has the path /assets/css/style.css. You can link to it in your site’s head.html using:

<link rel="stylesheet" href="{{ "/assets/css/style.css" | relative_url }}" media="screen">

Sass and Hugo Pipes in Hugo

Hugo uses Hugo Pipes to process Sass to CSS. You can achieve this by using Hugo’s asset processing function, resources.ToCSS, which expects a source in the assets/ directory. It takes the SCSS file as an argument. With your style definitions in a Sass file at assets/sass/style.scss, here’s how to get, process, and link your Sass in your theme’s head.html:

{{ $style := resources.Get "/sass/style.scss" | resources.ToCSS }}
<link rel="stylesheet" href="{{ $style.RelPermalink }}" media="screen">

Hugo asset processing requires extended Hugo, which you may not have by default. You can get extended Hugo from the releases page.

Configure and deploy to GitHub Pages

Before your site generator can build your site, it needs a configuration file to set some necessary parameters. Configuration files live in the site root directory. Among other settings, you can declare the name of the theme to use when building the site.

Configure Jekyll

Here’s a minimal _config.yml for Jekyll:

title: Your awesome title
description: >- # this means to ignore newlines until "baseurl:"
  Write an awesome description for your new site here. You can edit this
  line in _config.yml. It will appear in your document head meta (for
  Google search results) and in your feed.xml site description.
baseurl: "" # the subpath of your site, e.g. /blog
url: "" # the base hostname & protocol for your site, e.g. http://example.com
theme: # for gem-based themes
remote_theme: # for themes hosted on GitHub, when used with GitHub Pages

With remote_theme, any Jekyll theme hosted on GitHub can be used with sites hosted on GitHub Pages.

Jekyll has a default configuration, so any parameters added to your configuration file will override the defaults. Here are additional configuration settings.

Configure Hugo

Here’s a minimal example of Hugo’s config.yml:

baseURL: https://example.com/ # The full domain your site will live at
languageCode: en-us
title: Hugo Docs Site
theme: # theme name

Hugo makes no assumptions, so if a necessary parameter is missing, you’ll see a warning when building or serving your site. Here are all configuration settings for Hugo.

Deploy to GitHub Pages

Both generators build your site with a command.

For Jekyll, use jekyll build. See further build options here.

For Hugo, use hugo. You can run hugo help or see further build options here.

You’ll have to choose the source for your GitHub Pages site; once done, your site will update each time you push a new build. Of course, you can also automate your GitHub Pages build using GitHub Actions. Here’s one for building and deploying with Hugo, and one for building and deploying Jekyll.

Showtime!

All the substantial differences between these two generators are under the hood; all the same, let’s take a look at the finished themes, in two color variations.

Here’s Hugo:

OpenGitDocs theme for Hugo

Here’s Jekyll:

OpenGitDocs theme for Jekyll

Spiffy!

Wait who won?

🤷

Both Hugo and Jekyll have their quirks and conveniences.

From this developer’s perspective, Jekyll is a workable choice for simple sites without complicated organizational needs. If you’re looking to render some one-page posts in an available theme and host with GitHub Pages, Jekyll will get you up and running fairly quickly.

Personally, I use Hugo. I like the organizational capabilities of its Page Bundles, and it’s backed by a dedicated and conscientious team that really seems to strive to facilitate convenience for their users. This is evident in Hugo’s many functions, and handy tricks like Image Processing and Shortcodes. They seem to release new fixes and versions about as often as I make a new cup of coffee - which, depending on your use case, may be fantastic, or annoying.

If you still can’t decide, don’t worry. The OpenGitDocs documentation theme I created is available for both Hugo and Jekyll. Start with one, switch later if you want. That’s the benefit of having options.

28 Apr 00:10

A Coding Challenge Can’t Show How I Solve A Problem

Some number of people, on Twitter and elsewhere, have told me that it’s not about getting the right answer — it’s about showing the interviewer how you go about solving problems.

I’ve read a bunch of the advice on this, and the advice says things like: “Start talking. Restate the problem. Talk out an approach. Consider how much space/time it will use. And then start writing code.”

Which is of course not at all how I solve problems. I usually start with some hazy intuitive approach and start writing code. I code and think at the same time. I revise what I wrote, or even delete it. Then I go for lunch.

I come back to it, and if I’m still stuck I look in the documentation. Or Apple’s dev forums or Stack Overflow or Wikipedia. I might ask someone on my team or I might ask some friends on a Slack group. Or maybe I figure out an approach on my own after all, and then just do web searches to validate the approach.

And — this is critical — as I’m doing all of this I’m using the IDE I always use, with autocorrect, profiler, debugger, etc. All my tools. Where I’m used to the text editor and its syntax coloring and how it balances braces. Where hitting cmd-S — as I habitually do — doesn’t result in my browser prompting me to save the current page.

And — even more critical — I don’t have a 45-minute time constraint. Nobody is watching me type and judging. I’m writing code to solve a problem, rather than writing code to get a job.

There’s a huge difference between “solve this performance problem with a binary search” and “pass this test so you can feed your family.”

* * *

There’s a whole small industry to help people prepare for these tests — so it’s not like you’re getting the authentic programmer showing up. You’re getting the person who’s prepared for one of these.

Because of that, an interviewer is even less likely to learn how a candidate approaches solving a problem. Instead, they’ll learn how well the candidate prepared to make a good impression — which tells you nothing about how they’d actually solve a problem.

I think these end up favoring people with more time to prepare. It probably helps if college isn’t a decades-old memory — the closer you are to taking tests in school, the more comfortable you’ll be, and the less you’ll feel like this is an absurd exercise with no meaning.

28 Apr 00:09

West Pacific: Our Police

by Gordon Price

28 Apr 00:07

Tulsa Remote Worker Experiment

by Matt

Sarah Holder at Citylab has an interesting article on a program that paid people $10,000, a year of co-working, and a subsidized apartment to move to Tulsa, Oklahoma.

Traditionally, cities looking to spur their economies may offer incentives to attract businesses. But at a time when Americans are moving less frequently than they have in more than half a century, and the anticlimactic race to host an Amazon HQ2 soured some governments on corporate tax breaks, Tulsa is one of several locales testing out a new premise:  Pay people instead.

I love this idea, and hope that after the permanent step-up in remote work from the virus we see much more internal mobility between cities in the United States.

28 Apr 00:06

Encrypted email service ProtonMail makes Android app open-source

by Jonathan Lamont

Encrypted email service ProtonMail has made its Android app open-source.

The service announced the move in a blog post, noting that with the ProtonMail Android app going open-source, now all its apps are open-source. That includes the ProtonMail web app, iOS app, Bridge desktop app and all ProtonVPN apps.

“One of our guiding principles is transparency. You deserve to know who we are, how our products can and cannot protect you, and how we keep your data private. We believe this level of transparency is the only way to earn the trust of our community,” reads a line from the blog post.

Proton says that by open-sourcing the code for its app, it’s able to increase the security of the software. One way it does this is by leveraging the IT security community to search for vulnerabilities. Further, Proton believes that open source code contributes to a free internet.

The Android app was the last ProtonMail app to go open-source. The code is now available on GitHub and has passed an independent security audit from SEC Consult.

You can learn more about ProtonMail and its open-sourced apps on the company’s website.

Source: Proton Via: Android Police

The post Encrypted email service ProtonMail makes Android app open-source appeared first on MobileSyrup.

28 Apr 00:06

Apple reportedly pushing back iPhone 12 production by a month

by Patrick O'Rourke
iPhone 11 Pro Max

Apple is delaying mass production of its upcoming iPhone 12 series by a month due to issues stemming from the COVID-19 outbreak impacting its supply chain, according to a new report from The Wall Street Journal.

The report goes on to state that Apple is planning to release four new iPhone models later this year, measuring in at 5.4, 6.1 or 6.7 inches. Apple’s new iPhone reveals are historically set for September, with devices shipping by the end of the month. While the tech giant still plans to mass-produce its next iPhone line throughout the summer, the reported supply chain issues could still result in shortages. The Wall Street Journal also reports that Apple has slashed its expected iPhone 12 output by as much as 20 percent.

If true, this wouldn’t be the first time Apple’s iPhone line has suffered from a delay. While the iPhone 8 and iPhone X were announced at the same time back in September 2017, the latter phone wasn’t released until November reportedly due to production issues. Apple’s entry-level iPhone XR suffered similar production issues related to the phone’s LCD display after being announced in September 2018 and not making it to store shelves until October.

Apple’s iPhone 12 series is rumoured to include 5G, a LiDAR sensor and a smaller notch, according to several sources. The phone’s design is also tipped to closely resemble the more recent iPad Pro models that feature flatter edges.

Apple recently released the iPhone SE (2020), an updated version of the iPhone 8 with an A13 processor.

Source: The Wall Street Journal Via: The Verge 

The post Apple reportedly pushing back iPhone 12 production by a month appeared first on MobileSyrup.

28 Apr 00:06

Foodora Canada plans to shut down on May 11, 2020

by Jonathan Lamont
Foodora

Foodora Canada announced plans to close its business in May.

The food delivery service plans to cease operations effective at the end of the day on May 11th. In a press release, Foodora says it has not been able to reach a suitable level of profitability in Canada. In part, this is due to Canada’s “strong local players” and “highly saturated market” for online food delivery.

Further, Foodora says the strong competition has intensified of late, likely due in part to increased demand as more people rely on food delivery services during the COVID-19 pandemic.

“We’re faced with strong competition in the Canadian market, and operate a business that requires a high volume of transactions to turn a profit. We’ve been unable to get to a position which would allow us to continue to operate without having to continually absorb losses,” said David Albert, managing director of Foodora Canada.

Foodora, which has operated in Canada for the past five years, offered service in ten cities across the country. Additionally, Foodora has more than 3,000 partner restaurants across the country.

Employees will continue to be paid as stipulated in their contractual agreements, while Foodora’s rider community has also been given notice of termination.

Delivery Hero owns Foodora and maintains number-one competitive market positions in 36 of 44 countries across Europe, Latin America, Asia-Pacific, the Middle East and North Africa. Additionally, Delivery Hero operates its own delivery service primarily in over 530 cities around the globe. Delivery Hero is headquartered in Berlin, Germany.

The post Foodora Canada plans to shut down on May 11, 2020 appeared first on MobileSyrup.

28 Apr 00:06

B.C. government pledges funding for faster internet in rural and remote communities

by Bradly Shankar
Cell tower

The B.C. government has announced that it will provide “targeted funding” for faster internet in rural, remote and Indigenous communities across the province.

This is part of the province’s $50 million CAD Connecting British Columbia program that’s dedicated to improving infrastructure province-wide. Specifically, this is intended to help carriers bolster their networks as they face increased strain due to the large number of people staying at home during the COVID-19 pandemic.

“People working from home, students learning remotely and families practising physical distancing all need to know they can depend on internet access during this public-health emergency,” said Anne Kang, Minister of Citizens’ Services, in a statement. “Responding to the pandemic requires the best from all of us. Our communities need reliable internet access right now, and this new fund will get projects completed quickly.”

Internet service providers in B.C. can now apply for grants of up to $50,000 — or 90 percent of their expenses — to cover the cost of equipment that’s needed to implement these changes to infrastructure.

Via: Victoria News

The post B.C. government pledges funding for faster internet in rural and remote communities appeared first on MobileSyrup.

27 Apr 00:38

Mac Migration Pain

What happened was, my backpack got stolen with my work and personal Macs inside. The work machine migrated effortlessly but I just finished up multiple days of anguish getting the personal machine going and thought I’d blog it while it was fresh in my mind. This might be helpful to others, either now or maybe later at the end of a Web search. But hey, I’m typing this ongoing fragment on it, so the story had a happy ending. And to compensate for its length and sadness, I’ll give it a happy beginning; two, in fact.

Happy corporate

The IT guys were excellent, they’re in quarantine mode too but somehow arranged on two hours’ notice that I could drop by the spookily-empty office and there it was on a shelf outside the IT room. I’d backed everything that mattered up straight to S3, it restored fast and painlessly, and all I had to do was re-install a couple of apps.

As I’ve said before, Arq worked well. But as you will soon learn, there are pointy corners and boy did I ever get bruises.

Happy Ukelele

When I’m writing text for humans to read, I like to use typographical quotes (“foo” rather than "foo"), apostrophes (it’s not it's), and ellipses (… is one character not three). These are included in every font you’re likely to have, so why not use them? The default Mac keystrokes to get them are awkward and counter-intuitive.

Since (blush) I hadn’t backed up the keyboard layout I’d been running for years, I went and got Ukelele, a nifty little software package that makes it absurdly easy to modify your Mac keyboard layout. Here’s a screenshot.

Ukelele

I produced a layout called “Typographic Quotes” which remaps four keys:

  1. Opt-L is “

  2. Opt-; is ”

  3. Opt-' is ’

  4. Opt-. is …

Anyhow, here it is (with a nice icon even); download and expand it, then go looking on the Web for instructions on how to add it on your version of MacOS. It’s not rocket science.

Another nice thing about Ukelele: It ships with maybe the best How-To tutorial I’ve ever encountered in my lifetime of reading software how-tos.

Who am I?

When I set up the new machine, it asked me if I wanted to transfer from an older one and I said no; so it unilaterally decided that my Mac username (and home directory) would be “/Users/timothybray” as opposed to the previous “twbray”. Sort of dumb and clumsy but you wouldn’t think it’d hurt that much. Hah.

Arq hell

My backup, made with a copy of Arq 5 that I paid for, is on our Synology NAS. I had to fumble around a bit to remember how to turn it into an AFP mount, then I went to arqbackup.com and hit the first Download button I saw. Once I fired up Arq I started, at a moderate pace and step by step, to be driven mad. Because Arq flatly refused to see the network filesystem. I tried endless tricks and helpful-looking suggestions I found on the Web and got nowhere.

Here’s a clue: I’d typed in my license code and Arq had said “Not an Arq 6 license code”, which baffled me, so I went ahead in free-trial mode to get the restore running. Wait a minute… all the backups were with Arq 5. OK then. So I went and dug up an Arq 5 download.

It could see the NAS mounts just fine but things didn’t get better. The backup directory looked like /Volumes/home/tim/Backup and there were two different save-sets there, a special-purpose one-off and my “real” backup, charmingly named “C1E0FFBA-D68B-4B51-870A-F817FC0DF092”. So I pointed Arq at C1E0… and hit the “Restore” button and it… did nothing. Spun a pinwheel for like a half-second and stopped and left nothing clickable on the screen. Nor was there any useful information in any log I could find.

Rumor has it that I impressed my children with the depth and expressiveness of my self-pitying moans.

So I went poking around to look at the backup save-set that Arq was refusing to read, and noticed that its modification date was today. Suddenly it seemed perfectly likely that I’d somehow corrupted it, and blackness was beginning to close in at the edges of my vision.

[Yes, dear reader, the Synology in turn backs itself up to S3 but I’d never actually tried to use that so my faith that it’d even work was only modest.]

I finally got it to go. Protip: Don’t point Arq at your saveset, point it at the directory that contains them. Why, of course!

So I launched the restore and, being a nervous kinda guy, kept an eye on the progress window. Which started to emit a series of messages about “File already exists”. Huh? Somehow I’d fat-fingeredly started two Arq restores in parallel and they were racing to put everything back. So I stopped one but it still kept hitting collisions for a long, long time. Anyhow it seemed to work.

Now, it turns out that when Arq started up it pointed out it was going to restore to /Users/twbray where they’d come from, so I pointed it at “timothybray” instead.

Lightroom hell

I really care about my pictures a lot. So I fired up Lightroom Classic, it grunted something about problems with my catalog, and limped up in pile-of-rubble condition. I have a directory called “Current” that all the pictures I’m working on live in before they get filed away into the long-term date hierarchy, and it seemed to contain hundreds of pictures from years back, and my date tree didn’t have anything after September 2017, and all the pictures in Current that I could see were from my phone, none from any of the actual camera cameras.

I tried restoring one of the catalog backups that Lightroom helpfully generates, but that wasn’t helpful.

I may once again have made more loud self-pitying noises than strictly necessary. I poked around, trying to understand what was where and absolutely couldn’t make any sense of things. Scrolling back and forth suggested the Fuji camera photos were there, but Lightroom couldn’t locate them.

Eventually it dawned on me that Lightroom expects you to have your photos splashed around across lots of disks and so in its catalog it puts absolute file pathnames for everything, nothing relative. So the photos were all under “timothybray” and the catalog pointers were all into “twbray”. So how the freaking hell did Lightroom find all the Pixel photos!?!?.

Since I’d made probably-destructive changes while thrashing around, I just Arq-restored everything to /Users/twbray and left it there and now Lightroom’s happy.

Was my trail of tears over? If only.

Blogging hell

The software that generates the deathless HTML you are now reading was written in 2002, a much simpler and more innocent time, in Perl; in fact it comprises one file named ong.pl which has 2,796 lines of pure software beauty, let me tell ya.

It uses Mysql to keep track of what’s where, and Perl connects to databases via “DBD” packages, in this case DBD::mysql. Which comes all ready to install courtesy of CPAN, Perl’s nice package manager.

You get all this built right into MacOS. Isn’t that great? No, it isn’t, because it plays by Apple rules, which means it relies on something called @rpath. Which can go burn in hell, I say. The effect of it being there is that things that load dynamically refuse to load from anywhere but /usr/lib, and your modern MacOS versions make it completely physically impossible to update anything on that filesystem. So in effect it’s impossible to install DBD::mysql using the system-provided standard CPAN. There’s this thing called install_name_tool that looks like it should in theory help, but the instructions include poking around inside object libraries and life’s too short.

SQL hell

You might well ask “Why on earth are you using mysql anyhow in 2020? Especially when it’s a dinky blogging system with a few thousand records?” My only answer is “It was 2002” and it’s not a good one. So I thought maybe I could wire in another engine, and the obvious candidate was SQLite, software that I’ve had nothing but good experience with: Fast, stripped-down, robust, and simple.

DBD::SQLite installed with no trouble, so I handcrafted my database init script to fix some trifles SQLite complained about, and fired up the blogging system. Boom crash splat fart bleed. It turns out that the language mysql calls “SQL” is only very loosely related to the dialect spoken by SQLite. Which one’s right? I don’t fucking care, OK? But it would have been a major system rewrite.

Docker hell

So I took pathetically to Twitter and whined at the world for advice. The most common suggestion was Docker; just make an image with mysql and Perl and the other pieces and let ’er rip. Now I’ve only ever fooled around unseriously with Docker and quickly learned that I was just not up to this challenge. Yep, I could make an image with Perl and DBD::mysql, and I could make one with mysql, but could I get them to talk to each other? Could I even connect to the Dockerized mysql server? I could not, and all the helpful notes around the Net assume you already have an M.Sc. in Dockerology.

So this particular hell was really my fault as much as any piece of software’s.

John Siracusa

Deus ex machina!

Among the empathetic chatter I got back on Twitter, exactly one person said “I have DBD::mysql working on Catalina.” That was John Siracusa. I instantly wrote back “how?” and the answer was simple: Don’t use the Apple flavors of perl, build your own from scratch and use that. All of a sudden everything got very easy.

  1. Download the source from Perl.org. Since I’d filled up /usr/local with Homebrew stuff, I worked out of /usr/local/tim.

  2. ./Configure -des -Dprefix=/usr/local/tim

  3. make

  4. make test

  5. make install

  6. PATH=/usr/local/tim/bin:$PATH

  7. cpan install DBI

  8. cpan install DBD::mysql

Then everything just worked.

Now, I have to say that watching those Configure and make commands in action is a humbling experience. In 2020 it’s fashionable to diss Perl, but watching endless screens-full of i-dotting and t-crossing and meticulous testing flow past reminded me that those of us doing software engineering and craftsmanship need a healthy respect for the past.

Was I done?

Of course not. A whole bunch of other bits and pieces needed to be installed from Homebrew and CPAN and Ruby Gems and so on and so forth, but that’s not a thing that leaves you frustrated or bad.

Boy, next time I do this, it’ll go better. Right?

27 Apr 00:36

On Wikis, Blogs and Note Taking

by Ton Zijlstra

Yesterday I participated in, or more accurately listened in on, an IndieWeb conversation on wikis and their relationship to blogs (session notes).

I didn’t feel like saying much so kept quiet, other than at the start during a (too long) intro round where I described how I’ve looked at and used wiki personally in the past. That past is almost as long as this blog is old. Blogs and wikis were to me the original social software tools.

  • Between 2004 and 2010 I kept a wiki as the main screen on my desktop, sort of like how I used The Brain in years before that. In it I kept conversation notes, kept track of who’s who in the projects I worked on, etc. This, after a gap, in turn got replaced by Evernote in 2012
  • Between 2004 and 2013 I had a public wiki alongside this blog (first WakkaWiki, then WikkaWiki). In those years at one or two points I recreated it from scratch after particular intensive waves of automated spam and vandalism
  • Between 2004 and 2010 I had a wiki covering all the BlogWalk Salons I co-organised from 2004-2008
  • I had a script that let me crosspost from this blog to the wiki alongside it, so I could potentially rework it there. I don’t think that happened much really.
  • At one point I glued blogs, wiki and forum software together as a ‘Patchwork Portal’ for a group I worked with. Elmine and I presented about this together at BlogTalk Reloaded in 2006, showing the co-evolution of a budding community of practice and the patchwork portal as the group’s toolset. Afterwards it was used for a while in a ‘wiki on a stick’ project for education material by one of the group’s members.
  • Two years ago I re-added a wiki style section of sorts to this blog. As I’m the only one editing anyway, I simply use WordPress pages, as when I’m logged in everything has an edit button already. The purpose is to have a place for more static content, so I can refer to notions or overviews more easily, and don’t need to provide people with a range of various blogposts and let them dig out my meaning by themselves. In practice it is a rather empty wiki, consisting mostly of lists of blogposts, much less of content. A plus is that Webmentions work on my pages too, so bidirectional links between my and someone else’s blog and my wiki are easy.
  • With clients and colleagues over the years I’ve used Atlassian as a collaborative tool, and once created a wiki for a client that contained their organisation’s glossary. Current items were not editable, but had sections directly below them that were. Colleagues could add remarks and examples and propose new terms, and from that input the glossary would periodically be changed.

Stock versus flow, gardening and streams
Neil Mather, who has kept a really intriguing wiki as a commonplace book since last fall, mentioned he writes ‘stream first’. This stock (wiki) and flow (blog) perspective is an important one in personal knowledge management. Zettelkasten tools and e.g. Tiddlywiki focus on singular thoughts, crumbs of content as building blocks, and as such fall somewhere in between that stock and flow notion, as blogging is often a river of these crumbs (bookmarks, likes, an image, a quote etc.). Others mentioned that they blogged as a result of working in their wiki, so the flow originated in the stock. This likely fits when blog posts are articles more than short posts. One of the participants said his blog used to show the things from his wiki he marked as public (which is the flip side of how I used to push blog posts to the wiki if they were marked ‘wikify’).
Another participant mentioned she thinks of blogs as having a ‘first published’ date, and wiki items a ‘last edited’ date. This was a useful remark to me, as that last edited date in combination with e.g. tags or topics, provides a good way to figure out where gardening might be in order.
Ultimately blogs and wikis are not either stock or flow to me but can do both. Wikis also create streams, through recent changes feeds etc. Over the years I had many RSS feeds in my reader alerting me to changes in wikis. I feel both hemmed in by how my blog in its setup puts flow above stock, and how a wiki assumes stock more than flow. But that can all be altered. In the end it’s all just a database, putting different emphasis on different pivots for navigation and exploration.

Capturing crumbs, Zettelkasten
I often struggle with the assumed path of small elements to slightly more reworked content to articles. It smacks of the DIKW pyramid which has no theoretical or practical merit in my eyes. Starting from small crumbs doesn’t work for me as most thoughts are not crumbs but rather like Gestalts. Not that stuff is born from my mind as a fully grown and armed Athena, but notes, ideas and thoughts are mostly not a single thing but a constellation of notions, examples, existing connections and intuited connections. In those constellations, the connections and relations are a key piece for me to express. In wiki those connections are links, but while still key, they are less tangible, not treated as actual content and not annotated. Teasing out the crumbs of such a constellation routinely constitutes a lot of overhead I feel, and to me the primary interest is in those small or big constellations, not the crumbs. The only exception to this is having a way of visualising links between crumbs, based on how wiki pages link to each other, because such visualisations may point to novel constellations for me, emerging from the collection and jumble of stuff in the wiki. That I think is powerful.

Personal and public material
During the conversation I realised that I don’t really have a clear mental image of my wiki section. I refer to it as my personal wiki, but my imagined readership does not include me and only consists of ‘others’. I think that is precisely what feels off with it.
I run a webserver on my laptop, and on it I have a locally hosted blog where very infrequently I write some personal stuff (e.g. I kept a log there in the final weeks of my father’s life) or stream of consciousness style stuff. In my still never meaningfully acted-upon notion of leaving Evernote, a personal blog/wiki combo for note taking, bookmarking etc. might be useful. Also for logging things. One of the remarks that got my interest was the notion of starting a daily note in which over the course of the day you log stuff, and that is then available to later mine for additional expansion, linking and branching off more wiki-items.

A question that came up for me, musing about the conversation is what it is I am trying to automate or reduce friction for? If I am trying to automate curation (getting from crumbs to articles automagically) then that would be undesirable. Only I should curate, as it is my learning and agency that is involved. Having sensemaking aids that surface patterns, visualise links etc would be very helpful. Also in terms of timelines, and in terms of shifting vocabulary (tags) for similar content.

First follow-ups

  • I think I need to return to my 2005 thinking about information strategies, specifically at the collecting and filtering stage and the actions that result from it, and look again at how my blog and wiki can play a bigger role for currently underdeveloped steps.
  • Playing more purposefully with how I tie the local blog on my laptop to the public one sounds like a good experiment.
  • Using logging as a starting point for personal notetaking is an easy experiment to start (I see various other obvious starting points, such as bookmarks or conversations that play that role in my Evernotes currently). Logging also is a good notion for things like the garden and other stuff around the home. I remember how my grandmother kept daily notes about various things, groceries bought, deliveries received, harvest brought in. Her cupboard full of notebooks as a corpus likely would have been a socio-economic research treasure
27 Apr 00:35

Such an important point. Another illustration of precisely who has ‘taken back control’. twitter.com/arusbridger/st…

by mrjamesob
mkalus shared this story from mrjamesob on Twitter.

Such an important point. Another illustration of precisely who has ‘taken back control’. twitter.com/arusbridger/st…

"Tory grandee” used to mean distinguished former MP or Peer, or backwoodsman. Now it means major donor or billionaire telling the government what to do. pic.twitter.com/Ab3WUndMj6








27 Apr 00:34

Germany's Covid-19 expert: 'For many, I'm the evil guy crippling the economy' | World news

mkalus shared this story from The Guardian.

Christian Drosten, who directs the Institute of Virology at the Charité Hospital in Berlin, was one of those who identified the Sars virus in 2003. As the head of the German public health institute’s reference lab on coronaviruses, he has become the government’s go-to expert on the related virus causing the current pandemic.

In an exclusive interview, Drosten admits he fears a second deadly wave of the virus. He explains why Angela Merkel has an advantage over other world leaders – and why the “prevention paradox” keeps him awake at night.

Q: Germany will start to lift its lockdown gradually from Monday. What happens next?
A:
At the moment, we are seeing half-empty ICUs in Germany. This is because we started diagnostics early and on a broad scale, and we stopped the epidemic – that is, we brought the reproduction number [a key measure of the spread of the virus] below 1. Now, what I call the “prevention paradox” has set in. People are claiming we over-reacted, there is political and economic pressure to return to normal. The federal plan is to lift lockdown slightly, but because the German states, or Länder, set their own rules, I fear we’re going to see a lot of creativity in the interpretation of that plan. I worry that the reproduction number will start to climb again, and we will have a second wave.

Q: If the lockdown were kept in place longer, could the disease be eradicated?
A:
There is a group of modellers in Germany who suggest that by prolonging lockdown here for another few weeks, we could really suppress virus circulation to a considerable degree – bringing the reproduction number below 0.2. I tend to support them but I haven’t completely made up my mind. The reproduction number is just an average, an indication. It doesn’t tell you about pockets of high prevalence such as senior citizens’ homes, where it will take longer to eradicate the disease, and from where we could see a rapid resurgence even if lockdown were extended.

Q: If there were such a resurgence, could it be contained?
A:
Yes, but it can’t happen based on human contact-tracing alone. We now have evidence that almost half of infection events happen before the person passing on the infection develops symptoms – and people are infectious starting two days prior to that. That means that human contact-tracers working with patients to identify those they’ve been exposed to are in a race against time. They need help to catch all those potentially exposed as quickly as possible – and that will require electronic contact-tracing.

Q: How close are we to achieving herd immunity?
A:
To achieve herd immunity we need 60-70% of the population to carry antibodies to the virus. The results of antibody tests suggest that in Europe and the US, in general, we are in the low single digits, but the tests are not reliable – all of them have problems with false positives – and herd immunity is also not the whole story. It assumes complete mixing of the population, but there are reasons – in part to do with the social networks people form – why the whole population may not be available for infection at any given time. Networks shift, and new people are exposed to the virus. Such effects can drive waves of infection. Another factor that could impact herd immunity is whether other coronaviruses – those that cause the common cold, for example – offer protection to this one. We don’t know, but it’s possible.

Q: Should all countries be testing everybody?
A:
I’m not sure. Even in Germany, with our huge testing capacity, and most of it directed to people reporting symptoms, we have not had a positivity rate above 8%. So I think targeted testing might be best, for people who are really vulnerable – staff in hospitals and care homes, for example. This is not fully in place even in Germany, though we’re moving towards it. The other target should be patients in the first week of symptoms, especially elderly patients who tend to come to hospital too late at the moment – when their lips are already blue and they need intubation. And we need some kind of sentinel surveillance system, to sample the population regularly and follow the development of the reproduction number.

Q: What is known about the seasonality of the virus?
A:
Not a lot. The Harvard modelling group led by Marc Lipsitch has suggested that transmission might slow over the summer, but that it will be a small effect. I don’t have better data.

Q: Can we say for sure that the pandemic started in China?
A:
I think so. On the other hand, I don’t assume that it started at the food market in Wuhan. It is more likely to have started where the animal – the intermediate host – was bred.

Q: What do we know about that intermediate host – is it the “poor pangolin”, as it’s come to be known?
A:
I don’t see any reason to assume that the virus passed through pangolins on its way to humans. There is an interesting piece of information from the old Sars literature. That virus was found in civet cats, but also in raccoon dogs – something the media overlooked. Raccoon dogs are a massive industry in China, where they are bred on farms and caught in the wild for their fur. If somebody gave me a few hundred thousand bucks and free access to China to find the source of the virus, I would look in places where raccoon dogs are bred.

Q: Will it be useful to identify patient zero – the first human to have been infected with this virus?
A:
No. Patient zero is almost certain to have acquired a virus that is very similar to some of the first sequenced viruses, so it wouldn’t help us solve our current problem. I don’t think you could even argue that it would help us prevent future coronavirus pandemics, because humanity will be immune to the next Sars-related coronavirus, having been exposed to this one. Other coronaviruses could pose a threat – a prime candidate is the Middle East respiratory syndrome (Mers) virus – but to understand that threat we have to study how Mers viruses are evolving in camels in the Middle East.

Q: Are human activities responsible for the spillover of coronaviruses from animals into people?
A:
Coronaviruses are prone to switch hosts when there is opportunity, and we create such opportunities through our non-natural use of animals – livestock. Livestock animals are exposed to wildlife, they are kept in large groups that can amplify the virus, and humans have intense contact with them – for example through the consumption of meat – so they certainly represent a possible trajectory of emergence for coronaviruses. Camels count as livestock in the Middle East, and they are the host of the Mers virus as well as human coronavirus 229E – which is one cause of the common cold – while cattle were the original hosts for coronavirus OC43, which is another.

Q: Flu has always been thought to pose the greatest pandemic risk. Is that still the case?
A:
Certainly, but we can’t rule out another coronavirus pandemic. After the first Ebola outbreak, in 1976, people thought it would never come back again, but it took less than 20 years to do so.

Q: Is all the science being done around this coronavirus good science?
A:
No! Early on, in February, there were many interesting preprints [scientific papers that have not yet been peer-reviewed] around. Now you can read through 50 before you find something that’s actually solid and interesting. A lot of research resources are being wasted.

Q: Angela Merkel has been praised for her leadership during this crisis. What makes her a good leader?
A:
She’s extremely well-informed. It helps that she’s a scientist and can handle numbers. But I think it mainly comes down to her character – her thoughtfulness and ability to reassure. Maybe one of the distinguishing features of a good leader is that they are not using this present situation as a political opportunity. They know how counterproductive that would be.

Q: From where you stand, how is the UK handling the situation?
A:
It’s clear that testing was implemented a little bit too late in the UK. Public Health England was in a position to diagnose the disease very early on – we worked with them to make the diagnostic test – but rollout in Germany was driven in part by market forces, which made it fast, and that wasn’t the case in the UK. Now, though, I have the impression that the UK is really gaining momentum in this regard, and that it is coordinating testing efforts better than Germany.

Q: What keeps you awake at night?
A:
In Germany, people see that the hospitals are not overwhelmed, and they don’t understand why their shops have to shut. They only look at what’s happening here, not at the situation in, say, New York or Spain. This is the prevention paradox, and for many Germans I’m the evil guy who is crippling the economy. I get death threats, which I pass on to the police. More worrying to me are the other emails, the ones from people who say they have three kids and they’re worried about the future. It’s not my fault, but those ones keep me awake at night.

26 Apr 13:49

Running BigBlueButton – Week One

by Martin

BBB load on 16 vCPUs

So here we go: In the past one and a half weeks, I’ve set up and then started running a BigBlueButton video conferencing server ‘pro bono publico’ in my quality time for a faculty of a non-technical institution. It’s been an exciting week for many reasons. The main one: Like many projects, the number of people who wanted to use it grew quite quickly.

Initially, I started working on a solution that would let 3 to 4 lecturers run some sessions throughout the week. This quickly grew into 10 lecturers with some sessions in parallel. In the end, 16 lecturers showed up in the first organizational call. A quick poll indicated the need for a system that could handle up to 5 parallel sessions with 60 people in total. On top of that, video between all participants was seen as essential. From earlier experiments I knew that a 4-vCore VM could (only) handle a session with up to 20 people and the same number of videos between them. O.k., challenge accepted.

N:N Video Creates a Lot of Streams

When looking at the server load of previous calls, and also from a logical point of view, I got the impression that the main capacity driver is not the number of concurrent sessions but the number of people per session and the requirement that everybody should see the video of everyone else in each session. So running 3 sessions with 20 people each requires 3 * 19 * 20 = 1140 concurrent video streams, while 5 sessions with 20 + 10 + 10 + 10 + 10 people only require 740 video streams. In terms of bandwidth and CPU capacity, that is a big difference!
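
To make that arithmetic concrete, here is a minimal Python sketch of the same full-mesh back-of-the-envelope calculation; it simply assumes, as described above, that every participant in a session sends one video stream to every other participant:

def streams_per_session(participants):
    # Full mesh: every participant sends a video stream to each of the others.
    return participants * (participants - 1)

# Three sessions with 20 people each
print(sum(streams_per_session(n) for n in (20, 20, 20)))           # 1140

# Five sessions: one with 20 people, four with 10 people each
print(sum(streams_per_session(n) for n in (20, 10, 10, 10, 10)))   # 740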

I Need Those 16 CPUs

So based on these thoughts I calculated the size of the VM I would need and came up with a minimum of 8 vCPUs. To be on the safe side, I doubled that number and scaled up to a VM with 16 dedicated vCPUs in Hetzner’s data center in Nürnberg and 1 Gbit/s of network connectivity. That’s quite a bit of a setup and costs 166 euros a month.

Monday Is Coming

CPU and network load

O.k., so Monday came and I held my breath. And indeed, by 10 a.m. I had 5 sessions running in parallel with around 50-60 people in total: one with 20 people and the others with 8-10 people each. Most but not all people had their video activated. Those who didn’t had old devices and were struggling a bit. More about that in another post. I’ve put together a rough figure: the top graph shows the average CPU load across the 16 CPU cores in percent from 8 a.m. to 10 p.m., and the bottom graph shows the network throughput in Mbit/s. The peak sustained incoming data rate was somewhere around 14 Mbit/s, while the sustained outgoing data rate was around 120 Mbit/s.

As most people left the video quality at “medium” when activating their webcam, each uplink video generated 0.35 Mbit/s of traffic, i.e. about 2.85 videos per Mbit/s of inbound traffic. Therefore, the 14 Mbit/s of inbound traffic was generated by roughly 40 sources. As there were more users, some without video, this is only a rough estimate, but pretty close to reality.

Applying the same ratio to the 120 Mbit/s in the downlink direction results in around 342 simultaneous outgoing video streams. That’s lower than I expected from my theoretical calculation above.
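
For what it’s worth, the same estimate can be written down in a few lines of Python; the 0.35 Mbit/s per medium-quality webcam stream is the figure observed above, and the stream counts simply follow from dividing the measured throughput by it:

MBIT_PER_VIDEO = 0.35   # observed uplink rate of one medium-quality webcam stream

inbound_mbit = 14       # peak sustained incoming data rate (Mbit/s)
outbound_mbit = 120     # peak sustained outgoing data rate (Mbit/s)

# Each active webcam contributes one uplink stream to the server,
# and the server fans streams out to the viewers on the downlink.
sending_webcams = inbound_mbit / MBIT_PER_VIDEO     # 40.0
outgoing_streams = outbound_mbit / MBIT_PER_VIDEO   # about 342.9

print(round(sending_webcams), int(outgoing_streams))   # 40 342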

Session Experience

As my setup can still be considered rather small, I knew all the lecturers and joined some of their sessions over the week to see how things were going. For those people with good hardware, including me, the experience was excellent. Two-hour sessions worked without interruptions, and sound and video quality was excellent. When joining a call, it took only a few seconds to recognize which people on the call had old notebooks, smartphones or tablets, as their videos were not as clear and fluid as those from people with better hardware. Also, some people seemed to have connectivity issues. I’m not exactly sure why, because even in calls with 20 people and 15 videos, the downlink data rate never exceeded 3.5 Mbit/s, and even with the video camera switched on, the uplink data rate was a relatively low 0.35 Mbit/s. Even low-end DSL lines should be able to handle this. Unless, of course, other people in the household use the network simultaneously…

So based on these numbers and my usage scenario, i.e. a number of simultaneous sessions with between 10 and 20 people each and video switched on for most of them, my €160-per-month server can likely handle 80 people simultaneously. While that is enough for my purposes with a good margin, the number is far lower than the 200 people given for an 8-core setup in the BBB FAQs. The difference, as far as I can tell, is that the FAQ number assumes a far lower number of simultaneous video flows. That said, I am very happy about the outcome of week one. The server I have set up is running perfectly, 60 teachers and students are back working and studying, and the gloomy faces from when the project started have given way to smiles and fun during calls.

Old Hardware Is A Challenge

While I don’t want to end this post with a negative thought, there is one thing I need to mention: Around 10% of the people who wanted to use the system have hopelessly old and under-powered hardware. Most of them don’t have a computer, only an old smartphone or tablet that can’t handle the videos very well. Most of these old devices just drop out of the call after a short time. Video transmission and reception can be switched off in the BBB client, so audio is not a problem for most of these people. However, seeing at least the lecturer’s video is essential, so that alone is not much help to them. So we are working on this and hope that in the next couple of days we can enable as many of them as possible.

26 Apr 13:49

Telegram hits 400 million monthly users, teases secure video calling feature

by Aisha Malik

Instant messaging app Telegram has reached 400 million monthly active users, up from the 300 million reported in October.

Telegram says it is seeing 1.5 million new users every day amid the COVID-19 pandemic. The platform says that it is developing a secure video calling feature to rival Zoom and Houseparty.

The platform largely competes with WhatsApp, which has reached over two billion users worldwide. It’s interesting to note that Telegram usually sees a surge in users when WhatsApp is down.

Telegram has amassed some loyal users due to its desktop app, which doesn’t require users’ phones to be connected to function, and its other features like cloud storage and folders.

However, these features have allowed the platform to be misused by some people who use it to distribute and download pirated versions of songs, movies and apps. Telegram is still largely dealing with a piracy issue.

It’s not surprising that Telegram has seen an increase in users since pretty much every other platform also has during the pandemic. It will be interesting to see its new secure group video calling feature once it rolls out, and to see if the service will actually be able to compete with Zoom.

Source: TechCrunch 

The post Telegram hits 400 million monthly users, teases secure video calling feature appeared first on MobileSyrup.

26 Apr 03:39

This sounds like a good conversation to have. I...

by Ton Zijlstra

This sounds like a good conversation to have. I have been and am experimenting with the blog / wiki combination basically since the start of this blog. And I dislike the disconnect between that and my note taking system, as well as the always manual creation of wiki content (though I used to have a script that pushed blog content to the wiki for further evolution way back in 2004)

RSVPed Attending Gardens and Streams: Wikis, blogs, and UI -- a pop up IndieWebCamp session
Join the Zoom call: link to come This is an online only event. We will provide a Zoom video conference link 30 minutes before the session here and in the IndieWeb chat. There has been some sporadic conversation about doing impromptu IndieWebCamp sessions and thus far we've yet to organize one. Given...
26 Apr 03:21

Still Going

I keep thinking I’m going to get back in the habit of writing here regularly, but something eats the time and attention I think I have set aside for writing (be it daily or every few days; daily is better for me, as it sets a habit in place better than a non-daily cadence does).

Personal Update

I’m alive and well, which is good in these times. It is something people keep asking, and I’ve rather sucked at responding to many direct queries, mostly because of time and attention/focus. My health is good (knock wood). Work is going, quite busy and bordering on hectic, with many projects running concurrently and too many needed tabs open, all with the same favicon (really, people, this shouldn’t still be a thing in 2020; it was fixed in the early 2000s, and it’s long past time to go back to that).

Other Updates

I’m back to using an RSS reader somewhat regularly now that NetNewsWire is a rebuilt-from-scratch thing again. One of the things I’m really enjoying again is the weekly updates from people I know (and miss conversations with). Finding out what they are reading, working on, thinking about, eating, watching, etc. quite often surfaces things I may have an interest in. One of the things I’m continually lacking of late is good conversations that can go deep on many of the subjects I’m digging in and around. Between podcasts and personal blogs (quite often it is the weekly update), these sort of suffice, or have become one-sided conversations (as far as the party at the other end is concerned).

26 Apr 03:20

Archiving Catherine's Instagram

by peter@rukavina.net (Peter Rukavina)

I’ve been working on a way of keeping the photos and videos in Catherine’s Instagram online without Instagram, and finally came up with a solution today, which you can browse to your heart’s content at:

http://eramosa.net/instagram

Catherine posted to Instagram for four years, from December of 2015 to December of 2019. She started and ended in the Christmas season, and so both her first video and her last video were of the Christmas village that gets set up in our dining room every year.

To generate this static HTML version, I started with a data dump requested from Instagram, and wrote a short PHP script to process the media.json file found therein, which contains an index of every photo, video and story in the archive.

The script first loads in the JSON file and converts it to a PHP object:

$media_file = file_get_contents("media.json");
$media = json_decode($media_file);

Then it generates a thumbnail image, using ImageMagick, for every image, and writes the HTML needed to render that image, and link to the original:

fwrite($fp, "<h2>Photos</h2>\n");

foreach ($media->photos as $key => $photo) {
    // Thumbnail linking back to the original photo, captioned via the title attribute
    // (the exact HTML attributes in the original script may differ).
    $html = "<a href=\"" . $photo->path . "\"><img src=\"thumbnails/" . $photo->path . "\" title=\"" . htmlentities($photo->caption, ENT_QUOTES) . "\"></a>";
    fwrite($fp, $html . "\n");
    // Create the matching thumbnail subdirectory and a 200x200 thumbnail if needed.
    $parts = explode("/", $photo->path);
    $subdir = $parts[1];
    if (!file_exists("thumbnails/photos/$subdir")) {
        system("mkdir thumbnails/photos/$subdir");
    }
    if (!file_exists("thumbnails/" . $photo->path)) {
        system("convert " . $photo->path . " -resize 200x200 thumbnails/" . $photo->path);
    }
}

It goes through the same process for the videos and stories, the result being one big index.html that shows everything. A sort of anti-Instagram, UX-wise.

I uploaded this file, along with the generated thumbnails and the photos, videos and stories directories, to an Amazon S3 bucket, wired up Catherine’s eramosa.net domain name to the bucket and, presto, static website.
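
The post doesn’t say which tool handled the upload, but if you wanted to script that last step, here is a rough sketch using Python and boto3; the bucket name and the instagram/ key prefix are assumptions based on the eramosa.net/instagram URL rather than details from the actual setup:

import mimetypes
import os

import boto3  # assumes AWS credentials are already configured locally

s3 = boto3.client("s3")
BUCKET = "eramosa.net"   # hypothetical: a website bucket named after the domain
PREFIX = "instagram/"    # hypothetical: matches the /instagram path of the site

# Walk the generated site (index.html, thumbnails/, photos/, videos/, stories/)
# and upload every file with a sensible Content-Type so browsers render it inline.
for root, _, files in os.walk("."):
    for name in files:
        path = os.path.join(root, name)
        key = PREFIX + os.path.relpath(path, ".").replace(os.sep, "/")
        content_type = mimetypes.guess_type(path)[0] or "application/octet-stream"
        s3.upload_file(path, BUCKET, key, ExtraArgs={"ContentType": content_type})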

Screen shot of Catherine's static Instagram archive.

26 Apr 03:19

Where It Began: the Chilco Cul-de-Sac

by Gordon Price

This is the view that inspires me.

I not only appreciate the privilege, I appreciate the irony.  I’m an urbanist and advocate of Vancouverism – for livable high density.  And I’m living on a cul-de-sac – symbol of suburbia.  But it’s one with a lot of through traffic – in the case above, for three different animals and their vehicles, just not the one with an engine. (Thursday, April 23 at Chilco and Robson.)

I also appreciate the history: this is where the first permanent traffic-calming strategy was installed in a major North American city. There should be a plaque.

I tell the story here:

Fifty years later, it’s in ruins. You can see the video here: Chilco cul de sac.

Once construction of the water-line replacement is finished, a rebuilt Chilco cul-de-sac will better reflect the world we live in now, one designed more for the variety of animals and vehicles you see above.   Which, if I remember, is close to the vision those urban pioneers of the Seventies had hoped for.


View attached file (27.2 MB, video/quicktime)