Shared posts

05 Sep 14:53

Earn Their Attention First

by Richard Millington

The most common projects we turn down are those from people who want to create an online community but don’t yet have an existing audience.

It’s very hard to build a community if you haven’t yet earned the attention of an existing audience.

You don’t need a huge audience, but you do need a group of a few hundred people who will happily listen to what you have to say.

If you don’t have an existing audience, the alternatives are to buy an audience’s attention with social ads (expect to spend around $10 per active member) or to create a community concept so risky and so daring that people will naturally spread the word.

Generally speaking, it’s best to earn the audience’s attention first. Follow the CHIP process. Create content, host events, interview people, and participate in existing groups.

You will always find it’s easier to start a new community if you’ve participated in and supported related communities in the past.

03 Sep 03:00

Cities need ‘simple solution’ for dealing with dog poop - The Globe and Mail

mkalus shared this story :
I’m wondering: if the bags are cut open and then “flushed down the sewer”, couldn’t you automate the process? The plastic bags would float, so mechanically cutting them, then running them through water and scooping out the bags? Though I am sure I am missing something here.

The unleashed-dog area next to Vancouver’s Olympic Village is a hub of busyness on a warm Labour Day weekend, owners watching their pets sprint and jump against a backdrop of downtown condo towers.

The red bins next to the park get steady business, as those owners drop off what have become ubiquitous accessories to dog ownership in the city: plastic bags filled with their dogs’ biological waste.

Most of the owners don’t know about the labour-intensive process that follows those plastic bags of dog poop. The City of Vancouver pays contractors to carry out the unlovely job of slicing the bags open, mixing the feces with water and then delivering the liquid result to the region’s sewer-treatment system – after throwing out the bags, which are not biodegradable no matter what some manufacturers claim.

“Ugh,” is the response from dog owner Justin Lee when he hears about that procedure, after depositing a bag on behalf of his British bulldog, Bubba, at the park.

“I didn’t know that. I didn’t realize those bags aren’t compostable.”

But anyone working in the business of collecting solid waste – the region’s cities and the Metro Vancouver regional district – does.

And now they are also hunting for a better way to handle the thousands of tonnes of dog waste that are deposited throughout the Lower Mainland every year, on the ground, in regular garbage bins and all kinds of other places that are problematic.

That’s what prompted the City of Vancouver to issue a call recently (deadline, mid-September) for anyone with suggestions for better methods, from worm composting to regular composting to a new program to persuade dog owners to take their bags home, cut them open themselves and dump the contents into their own toilets. Or any other innovative method.

“We’re definitely looking for what are the other solutions out there,” says Jon McDermott, who oversees the city’s red-bin dog waste pilot project.

At the moment, there are red bins in just eight parks in Vancouver, the result of a test that started three years ago to see if dog owners would use them instead of dumping dog waste into the regular garbage stream.


It turns out they will use them, with the result that the city now collects 25 tonnes a year of poop-filled bags from the red bins – paying contractor Scooby’s Dog Waste Removal Service $40,000 a year to dispose of them.

But the city, prompted by a motion from Councillor Sarah Kirby-Yung, is looking at extending some kind of dog waste program to all of its 300-plus parks, which would mean a huge expansion in poop tonnage from the city’s estimated 32,000 to 55,000 dogs. Bag slicing is too primitive and expensive for that, Mr. McDermott says.

The city’s initiative is being welcomed by others dealing with the same issue.

“I’m really hoping the city will come up with innovative solutions,” says Karen Storry, Metro Vancouver’s senior project engineer in the solid waste division, which ends up dealing with the byproducts of 350,000 dogs directly in regional parks and indirectly in the sewage system. “This may be an opportunity for the right entrepreneur to come up with something.”

Dog waste is a major issue in the region. Feces left on the street gets washed into storm drains and eventually ends up in streams, rivers and the ocean. It’s also a problem in landfills. It’s a “concentrated producer of greenhouse gases when it breaks down,” the city’s call-for-information document notes.

It can’t go into city green bins because the region’s commercial composters have been adamant that it’s too problematic for them to process. And it’s not the best idea to put animal feces, which contain pathogens, in backyard composters, Ms. Storry says, even though that’s sometimes proposed as a solution.

“Then you’re managing this material that has risks and that maybe gets put on vegetables, or you have people putting dirt in their mouths.”

Metro Vancouver staff have tried a number of experiments already in their parks: doggy sandboxes with a scented pole in the middle to attract the animals to do their business in that area (dogs didn’t like it); in-ground sewer tanks (didn’t work logistically, as it required people to travel to a single tank, when so many regional parks consist of long, winding trails); and worm composting (a lot of work).

They initiated the red-bin idea in their parks, where across the region more than 100 tonnes of dog waste get deposited every year.

Ms. Storry says getting owners to put dog waste in their toilets would be a good idea, except for her fear that too many owners have been tricked into believing those special dog waste plastic bags are compostable. That’s a worry that her fellow Metro Vancouver engineer Linda Parkinson shares.

“My concern would be the rise of those ‘flushable’ bags,” says Ms. Parkinson, who works in the wastewater division, with some frustration. (The situation is of such concern that an environmental group has filed a complaint with Canada’s Competition Bureau about the mislabelling of those kinds of products.)

Ms. Storry says, whatever the solution ultimately is, it has to be easy for owners. “This isn’t their favourite part of dog ownership, so they want a simple solution.”

But Mr. Lee, who works as a longshoreman in Delta, B.C., is more optimistic about dog owners and their tolerance for handling dog poop.

“As owners, we spend time picking it up so we’re used to it. I think you would find more dog owners would do it than not. And we could help out by doing it that way.”

03 Sep 03:00

sqlite-utils 1.11


Amjith Ramanujam contributed an excellent new feature to sqlite-utils, which I've now released as part of version 1.11. Previously you could enable SQLite full-text-search on a table using the .enable_fts() method (or the "sqlite-utils enable-fts" CLI command) but it wouldn't reflect future changes to the table - you had to use populate_fts() any time you inserted new records. Thanks to Amjith you can now pass create_triggers=True (or --create-triggers) to cause sqlite-utils to automatically add triggers that keep the FTS index up-to-date any time a row is inserted, updated or deleted from the table.
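
To make that concrete, here is a minimal Python sketch (the database and table names are made up, and the articles_fts query assumes sqlite-utils' default FTS table naming):

    import sqlite_utils

    db = sqlite_utils.Database("demo.db")
    table = db["articles"]

    # Insert an initial row and enable FTS with the new create_triggers option
    table.insert({"id": 1, "title": "First", "body": "hello world"}, pk="id")
    table.enable_fts(["title", "body"], create_triggers=True)

    # Thanks to the triggers, new rows are indexed without calling populate_fts()
    table.insert({"id": 2, "title": "Second", "body": "another post"}, pk="id")

    # Query the FTS virtual table directly (named articles_fts by default)
    rows = db.conn.execute(
        "select rowid from articles_fts where articles_fts match ?", ["another"]
    ).fetchall()
    print(rows)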

03 Sep 03:00

Terry Gilliam says he disagrees with John Cleese's worldview | Film

mkalus shared this story from The Guardian.

Terry Gilliam has said he disagrees with the way his friend and fellow Monty Python member John Cleese sees the world, following comments from the latter endorsing Brexit and criticising the makeup of London.

The Python animator and Hollywood director despairs of Donald Trump and Brexit, both of which make him “terminally depressed”. Cleese has previously faced a backlash for voicing support for the UK leaving the EU, and for saying London was no longer an English city.

Gilliam told Radio Times that the only public figure he could trust in the current political climate was Sir David Attenborough. He also criticised the political correctness of contemporary comedy, but stopped short of supporting his friend’s view of the world.

He said: “I’m the instinctive, monosyllabic American and he’s the tall, very suave one. I love John enormously but I just disagree with the way he perceives the world.”

Gilliam said his friend, no matter his views, had always been funny. “John has never changed, he’s just got fat, that’s all.”

The director, who achieved critical recognition for his work on Brazil, Time Bandits and 12 Monkeys, does not approve of controls on comedy and has criticised attempts to diversify writing rooms and to police material.

“It doesn’t have anything to do with gender, sex or anything. Good writing is what it’s about, and that’s why you hire people, not because they’re this colour or that gender,” he said. “‘Is it funny?’ is the only thing that should be asked. Comedians are treading carefully and this is terrible. I really want some comedians to really go for it again, but people are frightened of saying the wrong thing, of causing offence.”

Gilliam, who renounced his US citizenship, said that although the talent in Monty Python had an Oxbridge background, their material pushed diversity and attacked the established order.

Radio Times also interviewed Sir Michael Palin, another member of the comedy troupe, who said there was still a “sparkle” left in his friend and fellow Python Terry Jones despite his diagnosis in 2015 with a form of dementia that affected his ability to communicate.

The pair met at Oxford before collaborating in Monty Python. Palin said there are still traces of Jones left despite the effects of his illness.

Palin said: “He’s still around, he’s not disappeared, quite apart from the wonderful work that he left behind, the work he’s done. There’s still a bit of Terry there, the sparkle in the eye. He can’t communicate, that’s the problem, which is so ironic for someone who loved words and debate and jokes and opinions and ideas.

“There’s enough of Terry there to make me feel grateful that I can still go and see him.”

03 Sep 02:59

Dwelling

by Rui Carmo
A rather characteristic small villa at my hotel, by streetlight.

03 Sep 02:58

Recently

by Tom MacWright

Reading

The most likely explanation for negative interest rates is far simpler. The economy has become a giant kill zone. In venture capital circles, the term “kill zone” has become quite popular to describe the phenomenon of having no places to profitably invest.

How Monopolies Broke the Federal Reserve connects the dots between the rise of monopolies, negative interest rates, and the rate of innovation. I found it very convincing, and my experience working in tiny companies that compete with huge companies matched up with some of its discussion of the ‘economic kill zone’ that results from highly-resourced players.

But none of those people are the richest person here, which means they will keep succeeding despite—not because of—the man who is. He doesn’t know what they know; he doesn’t have to know. No one like him does.

The Adults in The Room is a remarkable final article by a politics editor at Deadspin that eviscerates the idea of experienced managers who swoop in and monetize media businesses.

It is so rare to find a role-playing game like this. There is no plot, no mystery, no dragons, no romance, no treasure. I still don’t know who I am or where I came from: my amnesia is never resolved. But I know why I am here, and that is enough.

I read Where it is Easy to Do Good on a park bench in Chicago after a lovely but overwhelming conference, and it hit me hard. I was more familiar with Everest’s art and generative fiction, but this short, impactful story reveals a really great writing talent too.

“My good friend Richard Serra is building out of military-grade steel,” he says. “That stuff will all get melted down. Why do I think that? Incans, Olmecs, Aztecs—their finest works of art were all pillaged, razed, broken apart, and their gold was melted down. When they come out here to fuck my ‘City’ sculpture up, they’ll realize it takes more energy to wreck it than it’s worth.”

This article about Michael Heizer’s city-sized new artwork.

I also watched a few things, like The Last Black Man in San Francisco (incredible) and Midsommar (just okay). It really seems like A24 is coming up with hits left and right - their releases are almost the majority of films I’ve seen recently.

Enigma

Right now my Enigma Machine notebook on Observable is making the rounds on the internet, achieving a sort of virality I’ve never encountered before - somewhere north of 8,000 folks have ‘liked’ it on Twitter.

Since it’s just such eye-candy, the feedback has been overwhelmingly positive. There was a little constructive feedback that I implemented to make sure that it’s as historically accurate as possible.

This was a lucky project to work on. Sure, there’s the implementation, which was tricky and required a few interlocking areas of expertise. But anyone else, given enough time, could have created something similar. It’s more that – by knowing Dana, by reading Jason’s book, by noticing that the Enigma was neat looking, by working at a company focused on explaining things, by having the ability to set some 9-5 time apart to work on this project for months on and off – and confirming that nobody had already claimed the task of creating a cool visualization of it - it all sort of lined up, in a way that’s much more luck than skill.

I’ve worked on lucky problems before – the saga of the DC Code was one of them, where a few key factors lined up and the political and intellectual winds blew in the right direction. And I’ve worked on unlucky problems, ones where I spent as much or more time and energy, but they were too early or too late for their time.

The other thing that I should write about this project, before it slips from my mind and I again complain about having accomplished nothing to my patient but irritated friends, is that it took four major revisions. The rotors looked sort of like chord diagrams, then they were skeuomorphic and 3D, and then they were flat and 1-dimensional, and then they were a mix of all those things. The final design was a combination of all of those, plus Tarek Rached’s pithy comment on an earlier prototype.

The final revision doesn’t fit cleanly into any of the classic visualization types that I reach for. Which is something that I usually try to avoid: familiarity is the most surefire path to understandability and usability, and inventing something new is risky. But the final design cleanly represented what this was: a bunch of stacked 1:1 maps that rotate and change over time. Many of the visualizations I admire are simply plain charts done really well, like those at the Economist, but the outside-of-the-box style is best exemplified by the work of Eleanor Lutz, of Tabletop Whale. They have a visual polish and deep creativity that begs for them to be printed and used as posters or wallpaper or full-body tattoos.

03 Sep 02:56

The Busiest and Sleepiest Roads on PEI

by peter@rukavina.net (Peter Rukavina)

My colleague Matthew sent me a link to this compelling GIS application that allows exploration of the provincially-maintained roads on Prince Edward Island by annual average daily traffic:

The Roads of Prince Edward Island, by traffic count, 2018

The busiest road on the Island in 2018 (the latest year for which data is presented) was the Hillsborough Bridge between Charlottetown and Stratford, with 35,053 vehicles per day:

Average Annual Daily Traffic count on the Hillsborough Bridge between Charlottetown and Stratford, 2018

Four roads were tied for least busy, each with an average of 42 vehicles per day.

The first two comprise this stretch of road that includes the Aberdeen Rd., the Mill Rd., and the Mickle Macum Road, near Naufrage:

Route 357

The third is the Gowan Brae Rd. near Souris:

Gowan Brae Road.

A road that sees 42 vehicles a day is seeing under 2 cars an hour, on average. That’s a pretty sleepy road.

As you might expect, the busiest provincial roads on the Island are those that run into and around Charlottetown: each of the roads highlighted in yellow on this map had a daily average of more than 12,000 vehicles:

The busy roads around Charlottetown

Because the application supports exporting all of the data about the roads and the traffic counts to GeoJSON, it’s possible to cook up your own visualizations too. Here are the roads in QGIS, colour-coded by traffic count (the redder the line, the more traffic):

Visualizing PEI's provincial roads by traffic count

This visualization looks like a circulation system of the body. And, of course, that’s exactly what it is.
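
Since the raw GeoJSON is available, here is a rough sketch of reproducing that colour-coding in Python with matplotlib. The file name and the AADT traffic-count property are assumptions about the export format, not taken from the tool itself:

    import json
    import matplotlib.pyplot as plt
    from matplotlib import cm, colors

    # Load the exported road data (file name and "AADT" property are assumptions)
    with open("pei_roads.geojson") as f:
        roads = json.load(f)

    counts = [feat["properties"].get("AADT", 0) for feat in roads["features"]]
    norm = colors.Normalize(vmin=min(counts), vmax=max(counts))

    for feat, count in zip(roads["features"], counts):
        if feat["geometry"]["type"] != "LineString":
            continue  # skip non-line features for simplicity
        xs, ys = zip(*feat["geometry"]["coordinates"])
        # The redder the line, the more traffic
        plt.plot(xs, ys, color=cm.Reds(norm(count)), linewidth=0.6)

    plt.axis("equal")
    plt.show()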

Kudos to the Department of Transportation, Infrastructure and Energy for releasing this data, and developing such a useful tool for exploring it.

03 Sep 02:56

The Best Workshops

by Richard Millington

The best workshops are transformative.

They leave people feeling smarter and more confident than before.

They equip participants with new information.

They challenge participants to use this information to solve engaging problems.

They leave participants with a peer group they can call upon for support afterwards.

You can have workshops for your top members, newcomers, or attendees of your conference. When done well they are powerful, and woefully underused, tools.

p.s. 5 tickets remaining for my 80-person workshop at CMX Summit this week.

03 Sep 02:55

What is a total function?

by Eric Normand

Total functions are functions that give you a valid return value for every combination of valid arguments. They never throw errors and they don’t require checks of the return value. Total functions help you write more robust code and simpler code, free from defensive checks and errors.

Transcript

Eric Normand: What is a total function? By the end of this episode, you will know how this idea helps you write more robust systems.

Hi, my name is Eric Normand and I help people thrive with functional programming.

This is another important idea from math that we can borrow in our programs. It’s used often in functional programming. We’re talking about mathematical, pure functions here, not actions. These are calculations; they don’t have side effects.

However, it does apply to actions, to side effects, if you want. You can take this concept, it’s a flexible idea, and apply it to those things. We’ll just be talking about it for pure functions, mathematical functions, in this episode.

All right, so let’s get started. What is a total function? Quite simply, a total function has an answer for every combination of valid arguments.

That kind of raises a new question — what is a valid argument? We’re going to say that in typed languages it is very clear. If you’ve got static types…

Eric: …the valid arguments to a function are all the values that belong to the argument types.

If it says int — whatever that means, in whatever language you use — then every possible int, if it’s an int, then it’s a valid argument to that function. If it says string, that means every possible string. Any way you can make a string, then that’s a valid argument.

For untyped languages, it’s a little bit more complicated. Usually when we’re programming in untyped languages, the types are in our heads, and we have some unwritten, implicit, informal idea of what we mean as valid arguments.

That’s what I mean, these informal rules about what are the valid arguments. That’s what you’ve got in an untyped language.

If you can make them more explicit, if you can have some kind of checking on your arguments, some kind of preconditions, contracts, some kind of assertions, whatever you want to put in there to claim this is what’s valid, all those things will be helpful for making this more useful.

That said, I myself, for my own code, when it’s just me, I kind of know what the valid arguments are. I tend not to use too many assertions and things. Sometimes, I do. They’re helpful sometimes. I just want to put that out there. We’ll discuss this a little bit more, later in the episode. It’s a hairy discussion.

Total function has an opposite. Just to clarify again, just to repeat, for every valid argument. If it says it takes ints, I can pass any int and get an answer. That’s a total function. The opposite is a partial function. Maybe one of those ints doesn’t work.

A classic example is division. The type of division, let’s say, is that it takes two numbers and returns a number. It divides the two numbers and then returns a number, but you can’t divide by zero, so it throws an error or it does something. Returns some infinity, or not a number, or something like that. Returns nil, null, whatever.

All those things are kind of like the function lying. The function said I take two integers and I return an integer, but it didn’t. It returned something else or threw an exception. It didn’t even return. It broke when you passed a zero for the second argument, for the denominator.
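
As an illustration (not from the episode), here is that kind of partial function sketched in Python:

    def divide(numerator: float, denominator: float) -> float:
        # The signature claims: two numbers in, a number out. But for
        # denominator == 0 it raises instead of returning, so it is partial.
        return numerator / denominator

    divide(10, 2)   # 5.0
    divide(10, 0)   # raises ZeroDivisionError instead of returning a number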

Eric: We can see how this creates problems. First of all, robustness. If you have a divide by zero error in your program, it’s going to fail in production and it’s not going to be as robust. It’s not correct.

If you do want to make it correct, what do you do? You put an if statement before you divide, just to check. That makes more conditionals.

This is what the type system is supposed to be handling for us. We don’t want to have to check that the arguments are valid. We want to just run division, and we want the compiler to find these problems. This is in a typed language.

Another example of people doing this — I see this occasionally and it really bugs me, because I really try to do total functions as much as possible — let’s say you have a person class, and there are two subclasses. There’s employee, and there’s volunteer. Employees get a salary. Volunteers don’t.

On person, there is a method called salary. That means you can call salary on employees and volunteers. When you call salary on employee, it gives you a number. It’s like $10,000 a year or whatever. Then on volunteer, it throws an exception. The message says something like, “Volunteers don’t get paid.” That’s the friendly error message.

To make this total, you would need the salary method to return a number in every single possible case, instead of throwing an exception. That means that the volunteers’ salary should return zero. That’s the mathematical definition. I know there might be some reason why you didn’t return zero there.

You threw an exception, because you’re probably doing something wrong somewhere else in your code, and you want to signal, “Hey, we’re trying to get the salary of some volunteer. That’s probably an error.” That means like, “We don’t know if they’re a volunteer or an employee. We’ve lost track.”

You know what? I think that you’re modeling it wrong. If only employees have a salary, the salary method shouldn’t be on person, it should be on employee and have your compiler check that.

What a subclass is supposed to mean is that you can forget what the subclass is. You just care about the main class that it’s all subclassing from, the superclass.

If you don’t know that it’s a volunteer, that’s supposed to be OK. Just saying that this is an issue that can come up, even in something like an object-oriented model, where you have different methods that you think should be called. Maybe it’s an indication that your method is in the wrong place.

That’s one way to handle it, is to change the definition of what valid means for the arguments like we did with this person. It’s not valid to call salary on just any person. It has to be an employee.

Another way that you can handle this is just make sure that you handle all the cases. Make volunteers’ salary method return zero. That’s it, just make it happen. Don’t use nils and exceptions. Find a value that makes sense.
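
A quick sketch of the salary example in Python (the classes come from the episode’s scenario; the code itself is only an illustration):

    class Person:
        pass

    class Employee(Person):
        def __init__(self, annual_salary: float):
            self.annual_salary = annual_salary

        def salary(self) -> float:
            return self.annual_salary

    class Volunteer(Person):
        def salary(self) -> float:
            # Partial version: raise Exception("Volunteers don't get paid")
            # Total version: there is a truthful number for every case -- zero
            return 0.0

    people = [Employee(10_000), Volunteer()]
    print(sum(p.salary() for p in people))  # 10000.0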

Zero happens to work in this case because their salary actually is zero. Sometimes, people will return a null instead of an empty array when there are no answers.

It normally returns an array of something, an array of answers. If there’s no answer, it returns null. Why don’t you return the empty array? In that case, that’s still correct. Now, that function is total, and it works in a wider range of cases.

You don’t have to check the return value. You can call length on it and know that there’s zero things in there. Likewise, you can replace all those exceptions, trying to find some value that works within the already assigned return type.

That’s not always possible. Sometimes, you’re going to have to augment the return type. I guess that’s number three — augment the return type. What does this mean? You could make the empty case explicit. Change your type so that it has an empty case.

This is in an untyped language where you’re dynamically typed. You’re a lot more flexible. Make nil explicitly accepted. I do that in Clojure. I say it’s either a number or a nil. This is what numbers mean, and this is what nil means.

I don’t want to overload nil with too much meaning. In this specific case, this specific type that I’m returning, number means this. Nil means this. I don’t do that all the time, but sometimes I do.

Now, if I’m returning something like a variant, a tagged union, I might add a case to that variant. That’s like no answer. You could use a maybe or option type, instead of using null. If you’re going to throw an exception, change the return value to a Maybe Int.

If you had division, you could make division total by saying, “Division doesn’t return a number. Division returns a maybe number.” If you could pass zero in for the denominator, I’m going to return nothing. You’re going to have to check that.
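
In Python terms, “returning a maybe number” might look like this sketch, with Optional standing in for Haskell’s Maybe:

    from typing import Optional

    def divide(numerator: float, denominator: float) -> Optional[float]:
        # Total: every pair of numbers gets a return value; "no answer" is explicit
        if denominator == 0:
            return None
        return numerator / denominator

    result = divide(10, 0)
    if result is None:   # the branch is still there, but the type announces it
        print("no answer")
    else:
        print(result)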

Optionally, if there’s errors and things, you can make those part of the type. In Haskell, you would use an either, but somehow fold the errors into it. The way I imagine this is if you have a HTTP client, usually you can have it configured where it never throws an exception if the HTTP request doesn’t go through. You can have it actually return the error as a response.

If the client got in touch with the server and the server had an error like a 404 or a 500, that is a valid response. It’s an error, but it’s a valid response. The error modes have been baked into the response type. There’s codes, status messages, and things like that.

Even some clients — the good ones — will take a time-out, which means they didn’t even talk to the server, and turn it into the same kind of error type.

Say, a status 700, which isn’t a real HTTP status code, but we’ll call it status 700: time-out. Something like that. Then you can handle that in the same way with the same kinds of logic as you would handle even a valid response, a correct response.

The last way to handle these partial functions and make them total is to augment the argument type. In our case of division, instead of saying it takes two integers or two numbers and returns a number, we say it takes a number and a number that’s not zero. That’s a type — number that’s not zero — and it returns a number.

Now, we have to make this new type called number that’s not zero. Somehow, somewhere else, we will make one of those. When we make it, we check to make sure it’s not zero.

Now, this function knows that it trusts this type. It’s not going to be zero. It can unbox it and divide. Inside, it would call whatever unsafe, partial internal division it wants.
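
Here is a sketch of that idea in Python (the NonZero class is hypothetical, just to show where the check moves to):

    from typing import Optional

    class NonZero:
        def __init__(self, value: float):
            # The zero check lives here, where the value is constructed
            if value == 0:
                raise ValueError("NonZero cannot hold zero")
            self.value = value

        @classmethod
        def maybe(cls, value: float) -> Optional["NonZero"]:
            # Construction returns a maybe; the branch moves out here
            return None if value == 0 else cls(value)

    def divide(numerator: float, denominator: NonZero) -> float:
        # Total over its stated argument types: it trusts NonZero, no check needed
        return numerator / denominator.value

    d = NonZero.maybe(4)
    if d is not None:
        print(divide(10, d))  # 2.5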

Here’s a warning. These last two things where you’re augmenting the return type and augmenting argument type, they still need to check somewhere. They’re just deferring the if statements. Before, you were doing unsafe division. You weren’t checking if it’s zero. You’re going to have to catch that error.

That’s like a conditional. Somewhere it’s like, “Oh, if it throws an error, do this. Otherwise, you get an answer, do this.” It’s a branch. You have a branch. Now, if you turn it into a maybe — the return value becomes a maybe number — you still have to branch. You still have to say, “Is it a nothing or is it a just number?” You still have to do that branch.

It’s not reducing the complexity, really. It’s deferring it somewhere else. You still have all these other ways of dealing with maybe. For example, it’s a functor. It’s a monad. You can use it in other ways. It fits in with what you’re doing elsewhere in a nice way maybe. It’s a little bit better.

Likewise, if you augment the argument type, you still have to make one of these things somewhere else. You probably don’t have one when you need it. You have a number, and you need to make a number that’s not zero, however you make one of those things.

When you make one of those, something’s got to check if it’s zero. What if it’s zero? It’s going to return a nothing. There’s a maybe on the other side, too. You still have the branch. It’s still in there. It’s just now you have the type system on your side.

You can push all these if statements out to the edges. That’s the metaphor that people like to use — that there’s edges. When input comes in, the user types something in, check it for all these things — the zeros, etc.

Now, in the happy path where it’s not zero and you pass it through, there’s no checks. It’s all good. When it comes out, you got an answer. You’re pushing it to the edges. All the way, as close as possible to where the user input comes in or to where the user output goes out. Those are called the edges.

You’re deferring it, pushing it out, so you’re able to create a space in the middle where your functions are all total, where everybody’s happy. Where you don’t have to check things before you call methods on them. You don’t have to worry about error cases, etc.

Of course, this is a design decision. It is up to you to decide how far you want to push stuff to the edges. Whether it’s worth it in particular cases, etc. A lot of people push it really far to the extreme.

Some people don’t mind checking if a number is zero right before they divide. Some people don’t even check and they just let it throw there, and say, “Well, we’re just going to have errors.” That’s fine. It depends on your software and what you need to do with it.

I do want to come back to this idea that types do help with totality. I’m going to use Haskell as an example. Haskell has several built-in functions that are partial. I talked about division. It is partial in the base Haskell language. Also, head, which gives you the first element of a list. If the list is empty, that throws an error. That’s partial; this is the built-in function called head.

In general, Haskellers either look at that as a mistake or a wart on the language, because they want total functions. They might rewrite head so that it returns a maybe, and call it maybe head or something like that.
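
The same move, sketched in Python with Optional standing in for Haskell’s Maybe (the name safe_head is just an illustration):

    from typing import Optional, Sequence, TypeVar

    T = TypeVar("T")

    def safe_head(xs: Sequence[T]) -> Optional[T]:
        # Total: the empty list is a valid argument and gets an explicit "nothing"
        return xs[0] if xs else None

    print(safe_head([1, 2, 3]))  # 1
    print(safe_head([]))         # None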

Type systems do help in a number of ways.

One is they give you this way of defining what is valid so that the compiler checks it. We talked about how difficult it was in a dynamically typed language to even talk about what valid arguments means. The type system gives a definition of what a valid argument is.

It also will, in many cases, be able to check whether you’ve handled every case. If you have what’s called a tagged union, or a variant, or you have multiple cases, different constructors for the same type, there’s a flag you can set on the compiler that says, “Make it an error if I don’t handle one of those cases.”

That’s the compiler being on your side, helping you make total functions.

Types can help with these total functions. The language can help as well, for instance in Haskell. Another design decision of Haskell, one that has nothing to do with the type system, is that there are no null pointers in Haskell. That’s really nice.

Null pointers were a mistake. [laughs] The inventor of null pointers admitted this. They didn’t make the same mistake in Haskell. They learned from that, so there’s no null pointers.

Nulls, especially in languages like JavaScript or Java where you have a type system…Like in Java you have a type system, but you can always put a null basically anywhere. The type system won’t help you with that at all. That sucks.

As far as dynamically-typed languages go, I do want to say that this is one of those sore spots in dynamic languages where there’s one way to look at it, which is that every function accepts any kind of argument.

It’s true. [laughs] In a certain way of looking at it, I can go into my Clojure REPL and type plus, string and null. It will try to add them and it’ll say, “Ah, these aren’t numbers. Bleh.”

It’s true. I could run it. I did run it. I got an exception, so in some way every function is partial because you can always find some value of some type that wasn’t expected that will throw an exception. It’s true.

Except that most…This is the same in object-oriented languages that are dynamic: you can send any message. There’s no checks on what messages you’re allowed to send. Very often, you’ll get something…in JavaScript, you get undefined is not a function because it couldn’t find that method in the object. Or you’ll get method not found or whatever it’s called in your language.

That’s another ding where it says, “You can pass any message to anything.” Everything’s partial. Every method is partial because you can find some object that doesn’t answer that message. Yes.

But there’s another way to look at it which is that there is an intended type for every argument. Sometimes that type is really complicated, and sometimes it’s not specified. Sometimes, it is not explicit, but there is an intended type for every argument.

Does that mean that they’re all total? No, not saying that either. It’s a lot more total [laughs] than with the other perspective. These are things we’re used to dealing with in dynamic programming, dynamic-type languages. We’re used to thinking the compiler’s not going to help me. I got to remember the types.

Well, that’s all I want to say. That is the extent to which we have a good definition of valid in dynamically typed languages. There is an intended type. It doesn’t mean it works for everything.

For instance, I think in JavaScript and in Clojure, divide still takes two arguments, two numbers, that’s still the intended type, and it sometimes is an error. It’s still undefined for divide by zero. It’s just the way it is. It’s the same in Haskell.

Yeah, all right. Let me recap. It’s been total functions. It’s very useful for robustness if your functions are total. You have to do fewer checks. You’re not going to get the errors.

If you’re using only total functions, no errors, none. It’s not possible. Maybe you get something else. Something like out of memory or stack overflow, something like that, but you’re not going to get error because you chose the wrong arguments.

Total means you have an answer for every combination of valid arguments. The opposite is a partial function. Usually it means it throws an exception, or it returns null or nil, or some other error thing it does.

Why? It gives you robustness. It also improves your simplicity. You just have fewer conditionals within your center, inside the edges. A lot of times you’re just pushing conditionals to the edge, but inside you’ve got this nice total, no errors, no conditionals. Everything works right inside.

I went over four ways that you can increase the totality of your function. The first one was that you could just move stuff around because things are on the wrong types, or classes, or you just designed it poorly, not using the constraints of the language properly.

The second one was sometimes you just mistakenly forget to handle a case. You thought it should be an exception, but really, within the stated type or the intended type, there is a valid response for that. Salary on a volunteer should be zero or it shouldn’t even be on there.

The third one is you can augment the return type. If it turns out that there isn’t a good response, you’ve got to change the type. You’ve got to make it a maybe. Maybe you can add cases to your type so that you’re explicitly calling out the empty cases.

You could fold the error conditions, the failure modes, into the type, just like HTTP does. It uses the same response format for 200 responses as it does for 400 responses. Still got headers, still got a status code, still got a body. It’s all the same.

The fourth one is to augment the argument type. You could say this is a number that’s not zero. Somehow, somewhere else, you’re making sure that those are getting created correctly, but this function doesn’t have to worry about that, and that’s nice.

When you do this, you can push these conditionals, checks, and things out to the edges. If you’re checking stuff from the arguments it’s going closer to user input and you’re pushing the return type checks further out into where it’s getting output.

If you enjoyed this episode, if you learned something, there are a lot of other episodes. You can find all the past ones at lispcast.com/podcast. All the past episodes have audio, video, and text transcripts. Whatever medium you’re into, we’ve got it.

You’ll find links to subscribe, including the video, the RSS for the text, also the RSS or iTunes link for the podcast audio. You can also find social media links at that same address lispcast.com/podcast.

Please get in touch. I love discussing this stuff with people that’s why I’m doing it, broadcasting, trying to find people who I can relate to out there. It’s a lonely world. Got to find people that like the same stuff you like, have a lot in common.

My name is Eric Normand. This has been my thought on functional programming. Thank you for listening and rock on.

The post What is a total function? appeared first on LispCast.

03 Sep 02:51

UE WonderBoom 2 Review: The perfect portable speaker

by Brad Bennett

Over the last few years, Logitech-owned Ultimate Ears has firmly cemented itself as one of the top portable Bluetooth speaker companies. As expected, the UE WonderBoom 2 carries that legacy forward.

Throughout the summer, the WonderBoom 2 has been my go-to cottage speaker. Its compact size, passable sound, battery life and waterproofing have made it the perfect speaker for me — and likely you as well, depending on what you’re looking for from a portable speaker.

While it may not sound as good or last quite as long as the ultra-popular UE Boom 3, it makes up for those shortcomings with its ultra-portable size and reduced price.

Even though this speaker isn’t for audiophiles, the value it offers is impressive.

Same look, but that’s not bad

The WonderBoom 2 looks almost exactly like its predecessor, but with two new buttons: one on the top and one on the bottom. The speaker’s volume buttons are slightly larger as well.

The button on the top of the speaker, borrowed from last year’s UE Boom 3, is called the ‘Magic Button.’ It’s a fairly simple button that acts similarly to what is sometimes featured on headphone cords. This means that you press it once to play/pause and twice quickly to skip to the next song.

The button on the bottom — the one in the shape of a tiny pine tree — triggers the ‘Outdoor Boost Mode.’ This feature makes songs sound louder and crisper but also drops the bass level a bit. Either way, this button is crucial for listening to music outside in wide-open spaces.

The other two buttons on the top are for power and syncing the speaker. Beyond the new buttons and the features they bring, the WonderBoom 2 also looks awesome.

It’s about the size of a softball and weighs 420g.

The speaker is waterproof and has an IP67 water and dust resistance rating. That means that it’s able to be submerged in a metre of water for about half an hour. While I haven’t left the speaker in the water for that long, I’ve gone swimming with it and it’s also sat outside overnight. The WonderBoom 2 took both tests like a champ. It’s also worth noting that the speaker floats.

Overall, I think the speaker features a great design that’s stylish and portable and includes interesting new features.

Hey, Listen!

Regarding audio quality, the speaker’s sound isn’t as clear as the larger UE Boom 3, but it’s able to turn up the volume loud enough when you need to. The Outdoor Boost mode also significantly helps give it an extra kick in some scenarios.

While UE markets the speaker as perfect for the great outdoors, I also found that it excels inside.

Playing music while I work at my desk or when I’m in the shower is great since the little speaker can easily fill a room with sound. While the sound stage isn’t perfectly balanced like a Sonos or other larger, more expensive smart speakers, it offers excellent audio for its price tag.

Inside, the UE WonderBoom 2 is packing two 40mm active drivers and two 46.1mm x 65.2mm passive radiators.

These are slightly smaller than the Boom 3’s speakers, and you can tell when you’re listening to both speakers side-by-side, but when isolated, they both sound great.

What else is packed into this thing?

Beyond the two new buttons and the sound quality, the speaker also has a few hidden tricks.

For example, you can pair two WonderBooms together. If you have two speakers near each other, then you need to press and hold the Magic button until you hear the pairing sound to play music in stereo.

UE has also included a button combo for checking the speaker’s battery level. Holding both volume buttons will play one of three sounds that indicate the speaker’s battery level.

You can also pair two phones or other devices to the speaker at the same time so you and a friend can take turns playing songs.

What is a bit odd is that the WonderBoom 2 doesn’t connect to the same app as the UE Boom 3. While I admittedly rarely use the app with the Boom 3, the extra features it offers make it worthwhile in some cases.

UE says the WonderBoom 2 features 13-hour battery life and in my experience, that number holds up. I’ve never had it playing for 13 hours straight, but the WonderBoom 2 has easily made it through a weekend at a cottage without needing to be charged. Just like the UE Boom 3, the company once again isn’t including a charging brick in the box, which is annoying and a strange decision on UE’s part.

The post UE WonderBoom 2 Review: The perfect portable speaker appeared first on MobileSyrup.

03 Sep 02:50

Map shows long-term record of fires around the world

by Nathan Yau

For the NASA Earth Observatory, Adam Voiland describes about two decades of fires:

The animation above shows the locations of actively burning fires on a monthly basis for nearly two decades. The maps are based on observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite. The colors are based on a count of the number (not size) of fires observed within a 1,000-square-kilometer area. White pixels show the high end of the count—as many as 30 fires in a 1,000-square-kilometer area per day. Orange pixels show as many as 10 fires, while red areas show as few as 1 fire per day.

There are a lot of fires but, perhaps surprisingly given the news lately, the total area burned each year is decreasing.

Tags: fire, NASA

03 Sep 02:50

APS-C Gets Some Love (but not marketing)

Just before my month-long break I caught up with the mirrorless market by putting out three reviews of various Fujifilm APS-C cameras (X-T30, X-T3, X-H1). When I posted those and ran off into the bit-less wilds for a much needed break, I knew it wouldn't be long before the APS-C wars began heating up.

03 Sep 02:50

Anonymity doesn’t explain violence, white supremacy does

by Abigail Curlew


In the wake of the terrifying violence that shook El Paso and Dayton, there have been a lot of questions around the role of the Internet in facilitating communities of hate and the radicalization of angry white men. Digital affordances like anonymity and pseudonymity are especially suspect for their alleged ability to provide cover for far-right extremist communities. These connections seem to be crystal clear. For one, 8chan, an anonymous image board, has been the host of several far-right manifestos posted on its feeds preceding mass shootings. And Kiwi Farms, a forum board populated with trolls and stalkers who spend their days monitoring and harassing women, has been keeping a record of mass killings and became infamous after its administrator “Null”, Joshua Conner Moon, refused to take down the Christchurch manifesto.

The KF community claims to merely be archiving mass shootings; however, it’s clear that the racist and misogynistic politics on the forum board are closely aligned with those of the shooters. The Christchurch extremist was allegedly a member of the KF community and had posted white supremacist content on the forum. New Zealand authorities requested access to their data to assist in their investigation and were promptly refused. Afterwards, Null encouraged Kiwi users to use anonymizing tools and purged the website’s data. It is becoming increasingly clear that these far-right communities are radicalizing white men to commit atrocities, even if such radicalization is only a tacit consequence of constant streams of racist and sexist vitriol.

With the existence of sites like 8chan and Kiwi Farms, it becomes exceedingly easy to blame digital technology as a root cause of mass violence. Following the recent shootings, the Trump administration attempted to pin the root of the US violence crisis on, among other things, video games. And though this might seem like a convincing explanation of mass violence on the surface, as angry white men are known to spend time playing violent video games like Fortnite, there have yet to be many conclusive or convincing empirical accounts that causally link video games to acts of violence.

One pattern has been crystal clear, and that’s that mass and targeted violence seem to coalesce around white supremacists and nationalists. In fact, as FBI director Christopher Wray told the US Congress, most instances of domestic terrorism come from white supremacists. From this perspective, it’s easy to see how technological explanations are a bait and switch that try to hide white supremacy behind a smoke screen. This is a convenient strategy for Trump, as his constant streams of racism have legitimized a renewed rise in white supremacy and far-right politics across the US.

For those of us who do research on social media and trolling, one thing is for certain: easy technological solutions risk arbitrary punitive responses that don’t address the root of the issue. Blaming the growing violence crisis on technology will only lead to an increase in censorship and surveillance and intensify the growing chill of fear in the age of social media.

To better understand this issue, the fraught story of the anonymous social media platform Yik Yak is quite instructive. As a mainstream platform, Yik Yak was used widely across North American university and college campuses. Yak users were able to communicate anonymously on a series of GPS determined local news feeds where they could upvote and downvote content and engage in nameless conversations under random images to delineate users from each other.

Tragically, Yik Yak was plagued by the presence of vitriolic and toxic users who engaged in forms of bullying, harassment, and racist or sexist violence. This included more extreme threats, such as bomb threats, threats of gun violence, and threats of racist lynching. The seemingly endless stream of vitriol prompted an enormous amount of negative public attention that had alarming consequences for Yik Yak. After being removed from the top charts of the Google Play Store for allegedly fostering a hostile climate on the platform, Yik Yak administrators acted to remove the anonymity feature and impose user handles on its users in order to instil a sense of user accountability. Though this move was effective in dampening the degree of toxic and violent behavior on Yik Yak’s feeds, it also led to users abandoning the platform and the company eventually collapsing.

Though anonymity is often associated with facilitating violence, the ability to be anonymous on the Internet does not directly give rise to violent digital communities or acts of IRL (“In-real-life”) violence. In my ethnographic research on Yik Yak in Kingston, Ontario, I found that despite intense presence of vitriolic content, there was also a diverse range of users who engaged in forms of entertainment, leisure, and caretaking. And though it may be clear that anonymity affords users the ability to engage in undisciplined or vitriolic behavior, the Yik Yak platform, much like other digital and corporeal publics, allowed users to engage in creative and empowering forms of communication that otherwise wouldn’t exist.

For instance, there was a contingent of users who were able to communicate their mental health issues and secret everyday ruminations. Users in crisis would post calls for help that were often met with other users interested in providing some form of caretaking, deep and helpful conversations, and the sharing of crucial resources. Other users expressed that they were able to be themselves without the worrisome consequences of discrimination that entails being LGBTQ or a person of color.

What was clear to me was that there was an abundance of forms of human interaction that would never flourish on social media platforms where you are forced to identify under your legal name. Anonymity has a crucial place in a culture that has become accustomed to constant surveillance from corporations, government institutions, and family and peers. Merely removing the ability to interact anonymously on a social media platform doesn’t actually address the underlying explanation for violent behavior. But it does discard a form of communication that has increasingly important social utility.

In her multiyear ethnography on trolling practices in the US, researcher Whitney Phillips concluded that violent online communities largely exist because mainstream media and culture enable them. Pointing to the increasingly sensationalist news media and the vitriolic battlefield of electoral politics, Phillips asserts that acts of vitriolic trolling borrow the same cultural material used in the mainstream, explaining, “the difference is that trolling is condemned, while ostensibly ‘normal’ behaviors are accepted as given, if not actively celebrated.” In other words, removing the affordances of anonymity on the Internet will not stave off the intensification of mass violence in our society. We need to address the cultural foundations of white supremacy itself.

As Trump belches out a consistent stream of racist hatred and the alt-right continue to find footing in electoral politics and the imaginations of the citizenry, communities of hatred on the Internet will continue to expand and inspire future instances of IRL violence. We need to look beyond technological solutions, censorship, and surveillance and begin addressing how we might face-off against white supremacy and the rise of the far-right.

 

Abigail Curlew is a doctoral researcher and Trudeau Scholar at Carleton University. She works with digital ethnography to study how anti-transgender far-right vigilantes doxx and harass politically involved trans women. Her bylines can be found in Vice Canada, the Conversation and Briarpatch Magazine.

 

https://medium.com/@abigail.curlew

Twitter: @Curlew_A

 


03 Sep 02:50

Phones as Hotel Room Key Revisited

by Ton Zijlstra

Just a month ago I wrote here about my reservations concerning the use of mobile phones as hotel room keys. Yesterday, a hotel I will be staying at in the near future started sending me multiple (unasked-for) SMSes telling me to download their hotel app to ‘make my stay smarter’. Sure, I will trust download links in unrequested SMSes! Today, as I’ve ignored their messages, I received an e-mail imploring me to do the same.

The app they ask me to use is called Aeroguest, and their pitch to me is about easier check-in/out, using chat to contact staff, and using my phone as door key. The first two I’d rather do in person, and the last one is not a good idea as explained in the above link.

Why such an app might be seen as attractive to the hotel becomes clear if you look at the specifications of the app. A clear benefit is direct repeat bookings, saving the expensive middlemen that booking sites are. In my case I almost always book through the hotel’s website directly. And if I enjoyed my stay I usually book the same hotel in a city for my next visit. I do use booking sites to find hotels. In this case I’ve stayed in this hotel several times before.

The stated benefits for the guest (key, chat, check-in/out, choosing your room) are a small part of the listed benefits for hotels in using the app, such as up-selling you packages before and during your stay. An ominous one, when seen from the guest’s perspective, is ‘third party services’ access, presumably meaning potential access to your booking / stay history and maybe even payment / settlement information, requested preferences etc. Another, more alarming one, is “advanced indoor mapping”, which I take to mean tracking of guests through the hotel, which can yield information on time spent in hotel facilities, time spent in the room, how often / when the key was used, and, by matching it with other guests, whom you might be meeting with that is also staying in the hotel. In Newspeak, the data and analytics section of the app’s website describes this as “With transparency, you can proactively accommodate your guests’ needs.” Note that the guest is the one who is being made transparent. That is quite a price in exchange for being able to choose your specific room when checking in with the app.

I’ve replied to the hotel with my reasons for not wishing to use the app (linking to my previous blogpost), and told them I look forward to checking in at reception in person when I arrive. When I arrive I am curious to hear more about their usage of the app. For now, “making my stay smart” reads like the “smart cities” visions of old: it may be smart, but not for the individuals involved, merely for the service provider.

03 Sep 02:50

Downtown from above

by ChangingCity

Only 17 years separate these two oblique angled shots of the Downtown peninsula. Since our 2002 image was taken, over 26,000 residential units have been added Downtown and in the West End. That’s around 140 additional buildings of 10 or more storeys. Thousands more units are under construction and in the development stream, and even then the peninsula is by no means ‘built out’ – although sites are fewer, and harder to find.

There’s still a gap on the far right, on the waterfront, where the Plaza of Nations, and further Concord Pacific sites have yet to be built. There are a number of sites reserved for non-market housing inland behind and between the condo towers built by Concord on the former Expo lands, and a recent deal should see over half developed as non-market, with others returned to Concord for more market development.

On the left of the image Vancouver House is nearly complete (so Trish Jewison, who photographed the 2019 shot from the Global BC News helicopter, took the picture recently). From this angle the twisting taper of the building is almost invisible. In the middle of Downtown, the Wall Centre’s upper floors were reclad almost as dark as the bottom, so the distinctive two-tone effect in 2002 has been lost. From this distance the Empire Landmark wasn’t so obvious in 2002, but in 2019 it’s gone, and the replacement condo towers will be shorter. The Shangri La and Trump Hotel and condo towers almost line up from this angle, so only one tall tower appears in the distance.

Over on the right, the BC Place stadium has its new(ish) retractable roof, surrounded by new towers, with the distinctive rust red of the Woodwards Tower behind. The original ‘W’ was still in place in 2002 – now it’s down on the ground, and a replacement revolves in its place. Not too many new office towers have been added to the Central Business District, but that’s changing. Ten office buildings are currently being built, the most office space ever added to the city at one time, and much of it already leased. The biggest building is the Post Office, getting a pair of office towers added on top, with the huge building (that fills an entire city block) changing to office and retail space.


03 Sep 02:49

The 80 IQ point move: knowledge work as craft

by Jim

I've long been a fan of Alan Kay. We met twenty-five years ago as we were building a consulting firm that blended strategic and technology insight. One of Alan's favorite observations is “point of view is worth 80 IQ points.” Choosing a better vantage point on tough problems is time well spent, especially when there is pressure to get on with it.

I’m not sure I can count the number of times I’ve heard or said that we live in a knowledge economy. That we are all knowledge workers who live and work in learning organizations. Yet, we continue to celebrate the industrial revolution in those organizations. We celebrate scale and growth and control. We worry about the problems of accelerating change but assume that working harder and longer will suffice to keep pace.

There is a better vantage point. It is to treat knowledge work as craft work in a technological matrix. Craft work integrates materials, tools, and practices to create artifacts that simultaneously embody the skill and expertise of crafters and meet the practical and esthetic needs of patrons.

Examining each of those elements from a craft perspective illuminates what it takes to become effective as a knowledge worker and remain so as change continues to accumulate. It’s our 80 IQ point move.

Materials – make them visible to make them manageable

Industrial work is built on repeatability; my iPhone 6 Plus is fundamentally identical to yours; any differences are cosmetic. Give me the same consulting report you prepared for your last client and we have a problem. The output of knowledge work derives value by being unique.

Knowledge work produces highly refined abstractions: a financial analysis, a project plan, a consulting report, a manuscript, or an article. A piece of knowledge work evolves from the germ of an idea through multiple, intermediate representations and false starts to finished product. Today, that evolution occurs as a series of morphing digital representations which are difficult to observe and, therefore, difficult to manage and control.

A pre-digital counterexample reveals the unexpected challenges of digital work. I started consulting before the advent of the PC. When you had a presentation to prepare for a client, you began with a pad of paper and a pencil and sketched a set of slides. Erasures and cross outs and arrows made it evident you were working with a draft.

This might be two weeks before the deadline. You took that draft to Evelyn in the graphics department on the eighth floor. After she yelled at you for how little lead time you had given her, she handed your messy and marginally legible draft to one of the commercial artists in her group. They spent several days hand-lettering your draft and building the graphs and charts. They sent you a copy of their work, not being foolish enough to share their originals.

Then the process of correcting and amending the presentation followed. Copies circulated and were marked up by the manager and partner on the project. The graphics department prepared a final version. Finally, the client got to see it and you hoped you’d gotten it right.

Throughout this process, the work was visible. Junior members of the team could learn as the process unfolded and the final product evolved. You, as a consultant, could see how different editors and commentators reacted to different parts of the product.

Today’s digital tools make the journey from idea to finished product easier in many respects. When knowledge artifacts are digital, however, they are hard to see as they develop.

So what? Only the final product matters, right? What possible value is there to the intermediate versions or the component elements? Let's return to the bygone world of paper. Malcolm Gladwell offers an interesting observation in "The Social Life of Paper":

“But why do we pile documents instead of filing them? Because piles represent the process of active, ongoing thinking. The psychologist Alison Kidd, whose research Sellen and Harper refer to extensively, argues that “knowledge workers” use the physical space of the desktop to hold “ideas which they cannot yet categorize or even decide how they might use.” The messy desk is not necessarily a sign of disorganization. It may be a sign of complexity: those who deal with many unresolved ideas simultaneously cannot sort and file the papers on their desks, because they haven’t yet sorted and filed the ideas in their head. Kidd writes that many of the people she talked to use the papers on their desks as contextual cues to “recover a complex set of threads without difficulty and delay” when they come in on a Monday morning, or after their work has been interrupted by a phone call. What we see when we look at the piles on our desks is, in a sense, the contents of our brains.”

I have friends whose digital desktops have that look about them, but this strategy doesn't readily translate to the digital realm. The physicality of paper gave us version control and audit trails as a free byproduct.

Digital tools promote a focus on final product and divert attention from the work that goes into developing that product. “Track changes” and digital Post-It notes provide inadequate support to the process that precedes the product. Project teams employ crude naming practices in lieu of substantive version control. Software developers and some research academics have given thought to the problems of how to manage the materials that go into digital knowledge artifacts. Average knowledge workers have yet to do the same.

Visibility is the starting point. Once you make the work observable, you can make it improvable. Concepts like working papers, and audit trails, and personal knowledge management can then come into play.
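As a minimal sketch of what that visibility could look like in practice (my own illustration, not anything the essay prescribes), the Python snippet below copies each saved draft into a snapshot folder and logs a timestamp, content hash, and note, giving a crude audit trail of the kind paper once provided for free. The file names and folder are hypothetical.

import hashlib
import json
import shutil
import time
from pathlib import Path

SNAPSHOT_DIR = Path("snapshots")   # hypothetical folder for the draft history

def snapshot(draft: Path, note: str = "") -> dict:
    """Copy the current draft aside and log its hash, timestamp, and a note."""
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    digest = hashlib.sha256(draft.read_bytes()).hexdigest()[:12]
    stamp = time.strftime("%Y%m%d-%H%M%S")
    copy = SNAPSHOT_DIR / f"{stamp}-{digest}-{draft.name}"
    shutil.copy2(draft, copy)
    entry = {"time": stamp, "file": draft.name, "snapshot": copy.name,
             "sha256": digest, "note": note}
    with (SNAPSHOT_DIR / "audit-log.jsonl").open("a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

if __name__ == "__main__":
    # e.g. capture the state of a report before reworking a section
    print(snapshot(Path("client-report.md"), note="before reworking section 3"))

Git or a proper document management system does this far better, of course; the point is only that intermediate states become things you can see, compare, and return to.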

Tools – Every Day Carry and Well-Equipped Digital Workshops

Where craft matters, so do tools. That got lost in the industrial revolution. Tools were carved out and attached to minute pieces of process, not to the people who wielded them with skill. Meanwhile, the raw materials of knowledge work–words, numbers, and images–did not call for much in the way of tools other than pencil and paper. Mark Twain was an innovator in adopting the typewriter to improve the quantity and quality of his output. But the mechanical tools for aiding knowledge work came to be seen as beneath the dignity of important people.

There was a time when “computers” were women charged with carrying out the menial tasks of doing the calculations men designed and oversaw. It was not that long ago when executives thought it perfectly sensible to have their email printed out and prepare their responses by hand. These attitudes interfere with our abilities to be fully effective doing knowledge work in a digital world.

There’s the old saw that to a child with a hammer, everything looks like a nail. To a skilled cabinet maker, every problem suggests a matching hammer. A well-equipped workshop might contain dozens of different types of hammers, each suited to working with particular materials or in specific situations.

If our materials are digital, then our skill with digital tools becomes a manageable aspect of our working life.

There’s a useful distinction between basic and specialty tools. A basic tool in hand beats the perfect tool back in the shop or office. I’ve carried a pocket knife since my days as a stage manager in college. Courtesy of the TSA I have to remember to leave it behind when I fly or surrender it to the gods of security theater but every other day it’s in my pocket. There is, in fact, an entire subculture devoted to discussions of what constitutes an appropriate EDC—Every Day Carry—for various occupations and environments.

In the realm of knowledge work, Every Day Carry defaults to an email client, calendar, contact manager, word processor, and spreadsheet. For most knowledge workers, tool thinking stops here. Other than software engineers and data scientists, few knowledge workers give much thought to their tools or their effective leverage. Organizations ignore the question of whether knowledge workers are proficient with their tools.

If you are judged on the quality of the artifacts that you produce, you would do well to worry about your proficiency with tools. If you have control over your technology environment, set aside time to extend your toolset and learn to use it more effectively. Invest time and thought into how to design, organize, and take advantage of a knowledge workshop filled with the tools of your digital trade. Plan for a mix of EDC, heavy duty, and experimental knowledge work tools.

Practices – Design Effective Habits

Process thinking built the industrial economy. To deliver consistent quality product, variation is designed out and all the steps are locked down and controlled. If your goal is to craft unique outputs suitable to unique circumstances, industrial process is your enemy.

Where then are the management leverage points if industrial process is not the answer? McDonald's is not the only way to run a restaurant. In a knowledge work environment, both design and management responsibilities must be more widely distributed and shared. Peter Drucker captured this when he observed that the first question every knowledge worker must ask is "what is the task?"

Answering that question entails understanding the materials and tools available. From there, knowledge workers can design approaches to creating the necessary unique knowledge artifacts. Habits, routines, rituals, and practices replace rigid processes. In a fine restaurant, the day’s fresh ingredients set the menu and the menu guides which preparation and cooking techniques will be called for that evening. Line cooks, sous chefs, and chefs collaborate to create the evening’s dining experience.

The building blocks for constructing suitably unique final products are learned over years of practice and experimentation. They are passed on through observation and apprenticeship. In a volatile knowledge economy, they must also be subject to constant evolution, refinement, and innovation.

Learning as a craft practice

In the pre-industrial craft world, learning could be a simple process. Find a master and apprentice yourself to them. Time would suffice to transfer expertise and skill from master to apprentice.

We do not live in that world.

In an industrial world, learning was focused on fitting people to the work. Open-ended apprenticeship was replaced with narrow training programs to learn the specifics of where humans fit into a larger, engineered, process design.

We do not live in that world.

Integrating a craft point of view with the pace of the technological environment that now exists makes learning a craft practice to master.

We are all permanent apprentices. We are also all permanent masters of our craft. Apprenticeship must become conscious and designed. Mastery will always be temporary. Our understanding of materials, tools, and practices will always be dynamic. Learning and performing will be in constant tension.

The post The 80 IQ point move: knowledge work as craft appeared first on McGee's Musings.

03 Sep 02:47

10 ideas

by Bryan Mathers

It’s a case of the cobbler’s shoes – I rarely find the time to think about how to articulate my own process, but this hot-air balloon really helped me out…

How to create 10 visual ideas from a 1-hour conversation: https://visualthinkery.com/10-ideas/

The post 10 ideas appeared first on Visual Thinkery.

03 Sep 02:47

2019 HondaLink Infotainment Review: Looking better all the time

by Ted Kritsonis

Honda maintains an iterative approach with its HondaLink infotainment system, but the 2019 version takes a few key steps in the right direction.

One thing automakers are slowly getting better at with their respective systems is usability, especially as it relates to aesthetics. The new-look HondaLink follows the trend, featuring a simpler layout that looks better because of improved graphics.

The current system isn’t in every Honda model just yet. I got to test it in a 2019 Accord Touring, but it also comes in the Insight, Pilot, Passport and Odyssey. There are worthwhile points in what Honda did here, and the room for improvement seems within reach.

The basics

When I previously tested HondaLink, the system had undergone a sea change that included adopting a capacitive touchscreen as well as embracing Apple CarPlay and Android Auto. The user interface is one of the biggest differences this time, though it does go further than that.

One of the key changes was going back to a physical input for volume. When Honda overhauled things in 2015-16, it opted to go with a volume slider that both customers and reviewers maligned as unintuitive. It's gone now, replaced with a knob that makes it much easier to adjust the volume up or down. Steering wheel controls are still there.

Many of the foundational pieces remain, and for the most part, the system functions similarly. Some of the new elements, however, do add to the package in positive ways.

Connections and layout

At the centre of it all is the 8-inch display, which seems to have a sharper resolution than the previous one. Part of that may be a credit to the improved UI that feels cleaner and easier to navigate. Icons are larger and sub-menus shorter, making it a little easier to get from point A to B within the system.

Flanking the screen on both sides are physical buttons acting as shortcuts to various software features. I liked it because it negated having to touch the screen all the time, and reduced the number of steps to get to something. I would argue the biggest difference is the volume knob, which Honda had already returned to last year.

Interestingly, Honda didn't include USB-C ports in the Accord. While some other manufacturers have already embraced the popular port, traditional USB-A reigns here. What's neat is that the dash and the centre console each have an active port, and either one can interface with the system to run CarPlay or Android Auto. I tried to see if I could somehow run both at once, but the system forces you to choose one.

There are two USB-A ports in the rear for back-seat passengers, though they're only for charging. There is no aux-in jack, consistent with what the rest of the industry has been doing. The 12-volt socket is in front, next to the dash USB port.

Honda embedded an NFC tag into the dash on the passenger side, and it proved to be a rapid way to pair a phone with the car. It didn’t work with an iPhone, as expected, yet was smooth with Android phones I tested.

I didn't get a chance to test the in-car Wi-Fi hotspot for lack of a data bucket I could use, but it seems to offer the same thing others do. AT&T handles the connection and data, with the same pricing other cars have. It's 20GB for $200 prepaid over 12 months — still the best deal of them all, in my view. Like the others, you can also roam into the United States and not pay extra.

The wireless charging pad under the dash is a little recessed and not the easiest to access because of its proximity to the stick shift. It’s a limited feature in that it’s only standard in the Touring, Sport 2.0 and Touring 2.0 trims.

Voice and texting

Things get a little interesting with Bluetooth. Unfortunately for Android users, Google Assistant doesn't respond to the car's voice button over a Bluetooth connection — it only works through Android Auto. That's not the case with Siri, which pipes up when long-pressing the steering wheel's voice button.

A short press brings up Honda's own voice assistant, which lives within the car and so offers the typically limited depth of built-in assistants. That includes calling and texting, the latter of which comes with an unusual twist.

I was surprised to see that I could actually read incoming text messages in full — even while the car was in drive. Similar text-reading features I've tested in other vehicles often limit this to when the car is in park. I asked Honda Canada to confirm this was, in fact, a feature and not a bug, and was told it was the former.

The system can read out incoming texts with a simple tap on the screen when they pop up. The messages app also maintains constant access to messaging, including the ability to respond.

Responses, however, are more of the canned variety, but I did get away with verbalizing basic ones. Siri and Google Assistant are far better at it, yet this is one of the deeper voice integrations I’ve seen in an infotainment system independent of those platforms.

CarPlay and Android Auto

There weren’t many surprises here. With Honda having already supported both platforms going back to 2016, the Accord has included them since at least 2015 in some markets.

The big difference, at least for me, had little to do with Honda, and that was the new-look Android Auto. I had already opted to try it out prior to this test drive, and found it a significant improvement over the original UI. CarPlay will get its own little makeover when the next version of iOS goes live to the public.

Not all the physical buttons on the sides shortcut to anything on either platform. Instead, they leapfrog past them and go to HondaLink. For example, the ‘Home’ button goes to the HondaLink main menu, not the home screen for Apple or Google’s respective platforms. However, the back button seemed to backtrack within those platforms. Skipping or repeating tracks always works too.

Honda will ultimately benefit from what these two platforms will look and function like when they’re both out of beta. Since they live on a phone, the automaker doesn’t have to do anything, and the improved UI only makes HondaLink look better.

HondaLink app

I wasn’t set up with a way to test out the HondaLink app for iOS and Android, which allows for some remote access to the car. The Accord Touring can access a variety of features within the app, including remote start, remote lock/unlock, Find My Car and stolen vehicle tracking and assistance.

Not all these features come free. After an initial trial period of 12 months, subscribing after that ranges from $99 to $369 per year, depending on what package you go with.

I should also note that Honda has a unique section that deals with APK files. There is a way to upload them, but it’s not clear how far this goes when it comes to sideloading apps. Honda says the files are “for internal use to manage the system,” without elaborating further.

The post 2019 HondaLink Infotainment Review: Looking better all the time appeared first on MobileSyrup.

03 Sep 02:45

Minimum Viable Bureaucracy

The world of work has changed. Companies have transitioned from highly structured 9-to-5 clockworks, to always-on controlled chaos engines, partially remote or wholly distributed. Workers are affected too, expected to keep up with the 24/7 schedule of their directors and customers. This is only possible with the many communication and collaboration tools we have at our disposal. I work remotely myself, often across an ocean, and after years of this, I'd like to share some observations and advice.

Mainly, that the use of these tools is often severely flawed. I think it stems from a misconception my generation was brought up on: that technology is an admirable end in itself, rather than merely a means to an end. This attitude was pervasive during the 80s and 90s, when a dash of neon green cyberpunk was enough to be too cool for school. It laid the groundwork for the tireless technological optimism that is now associated with Silicon Valley and its colonies, but which is actually just part of the global zeitgeist.

In this contemporary view, when you have a problem, you get some software, and it fixes it. If it's not yet fixed, you add some more. Need to share documents? Just use Dropbox. Need to collaborate? Just use Google Docs. Need to communicate? Get your own Slack, they're a dime a dozen. But there is a huge cost attached: it doesn't just fragment the work across multiple disconnected spaces, it also severely limits our expressive abilities, shoehorning them into each product and platform's particular workflows and interfaces.

Brazil movie poster

The Missing Workplace

The first and most prominent casualty of this is the office itself: we have carelessly dismissed its invisible benefits for the dubious luxury of going to work in our pyjamas as remote workers. This is accelerated by the plague of open plan offices, which resemble cafeterias more than workshops or labs. The result in both cases is the same: employees sequester themselves, behind headphones or physical distance, shut off from the everyday cues that provide ambient legibility to the workplace.

It's not just the water cooler that's missing. Did that meeting go well, or are people leaving with their hands in their hair? Is someone usually the last one to turn off the lights, and do they need help? Is now a good time to talk about that thing, or are they busy putting out 4 fires at once? Did they even get a decent night's sleep? Good luck reading any of that off a flakey online status indicator that is multiple timezones away.

Slack status

There are tools to fix this, of course. Just set a custom status! With emoji! Now, instead of just going about your work day like a human, you have to constantly self-monitor and provide timely updates on your activities and mental state. But there's an app for that, don't worry. Everyone turns into their own public relations agent, while expected to actively monitor everyone else's feeds. The solution is more of the problem, and the simple medium of body language is replaced by a somewhat trite and trivially spoofable bark. The only way you will get the real information at a distance is by having a serious conversation about it, which takes time and energy.

Even if you do though, you won't be privy to who else is talking to who, unless you explicitly ask. Innocently peeking in through the meeting room glass makes way for a complete lack of transparency. What's more, clients don't even visit, lunches are often eaten alone, and occasional beers on Friday are usually off the table. They're not coming back when your workforce is spread across multiple timezones. This is a fundamentally different workplace, which needs a different approach.

The environment is asynchronous by default, yet people often still try to work in a synchronous way. We continue to try and maintain the personal and professional protocols of face to face interaction, even if they're a terrible fit. If you've ever been pinged with a context-less "hey," waiting for your acknowledgement before telling you what's up, you have experienced this. Your conversation partner has failed to realize they have all the time in the world to converse slowly, glacially even, with care and thought put into every message, which is the opposite of rude in that situation. A bare "hey" is worse, because it means you can't decide whether it's actually necessary to respond when the timing is inconvenient.

A related example is the in-person "hey, I just sent you an email": they know they'll get a response eventually, but they want one now. By first sending the email, they are able to launder their interruption, passing the bulk of the message asynchronously, while keeping their synchronous message a seemingly trivial nothing. This isn't always bad, if you e.g. summarize some urgent notes immediately and let the email fill out the details, but this is rarely the case.

Write-Only Media

The notifications themselves are also a problem. They feature so prominently, they turn every issue into a priority 1 crisis. If left to accumulate for later they just get in the way, like a desk you can't even clear. The expectation is that you'll immediately want to look at it, and this is why they are so enticing for the sender: a response is practically guaranteed. But any medium that caters more to the writer than the reader should be treated with extreme skepticism [Twitter, 2006].

Instant notifications are an example of a mechanism that produces negative work. Whatever task is being interrupted is not just on pause, you've added an additional cost of context switching away and back that wasn't there before. A more destructive version is the careless Reply to All and its close sibling, the lazy Forward to Y'all. Whatever was said, instead of now 1 person reading it, there will be many. Everyone will now spend time digesting it independently, offering a multitude of uncoordinated replies, each of which will then need to be read, and so on. It can even become iterated negative work, and it scales up quickly.
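To make the scaling concrete, here is a back-of-the-envelope model in Python. The team size and per-message times are invented numbers for illustration only, not measurements from anywhere.

# Rough cost, in person-minutes, of an unfiltered reply-to-all thread versus a
# single summarizer. Every constant below is an assumption for illustration.
TEAM_SIZE = 10      # people on the thread
READ_MIN = 2        # minutes to read one message
REPLY_MIN = 5       # minutes to write one reply
REPLY_RATE = 0.5    # fraction of readers who fire off their own reply

def reply_all_cost(team: int) -> int:
    """Everyone reads the original, half reply, and everyone reads every reply."""
    replies = int(team * REPLY_RATE)
    first_wave = team * READ_MIN
    reply_writing = replies * REPLY_MIN
    reply_reading = replies * team * READ_MIN
    return first_wave + reply_writing + reply_reading

def summarizer_cost(team: int) -> int:
    """One person digests the message and sends a single summary to the team."""
    digest = READ_MIN + 2 * REPLY_MIN      # read it, write a careful summary
    return digest + team * READ_MIN        # everyone reads the summary once

print(f"reply-to-all: ~{reply_all_cost(TEAM_SIZE)} person-minutes")
print(f"summarized:   ~{summarizer_cost(TEAM_SIZE)} person-minutes")

With these made-up numbers the broadcast thread costs roughly four to five times as much as the summarized one, and the gap widens as the team grows, since the reply-reading term grows with the square of the head count.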

Any time a manager forwards email wholesale from the level above, or a rep forwards requests from a 3rd party to the entire team, this is what they are doing, and they should really stop that. Instead, you should make sure everyone mainly mass-sends answers, rather than questions. The purpose of a manager and a rep is to shield one side of a process from the details of the other, after all. You do not want unfiltered, unvetted assignments to be mixed in with the highly focused, day to day communication of a well-oiled team. Any such attempt at inter-departmental buck passing should be resisted vigorously as the write-only pollution that it is. That said, specialty tools like issue trackers and revision control can be extremely useful even for non-specialist workers. You just need to make sure each group has their own space to work in, and is taught how to use it well.

Each person in a chain, even within a group, should act like an information optimizer, investigating and summarizing the matter at hand so the next ones don’t have to. Conversational style should be minimized, in favor of bullet points, diagrams and analysis. If you don't do this, you will end up with a company where everyone is constantly overloaded by communication, and yet very little gets resolved.

Ping Me Twice, Shame On You

If you do need to get a bunch of people into a synchronous room, virtual or otherwise, there needs to be a clear agenda and goal ahead of time. There should be concrete takeaways at the end, in the form of notes or assigned tasks. Otherwise, you will have nothing to constrain the discussion, and then several people will have to decide for themselves what to do next with the resulting tangle of ideas. Sometimes you will just have the same meeting again a few weeks later, especially if not everyone attends both. Instead you should aim to differentiate between those who need to attend a meeting versus those who just need to hear the conclusion. Particularly naive is the notion that mere recordings or logs are a sufficient substitute for due diligence here, as it takes a special kind of stupid to think that someone would voluntarily subject themselves to an aimless meeting they can't even participate in, after the fact.

This means optimizing for people-space, ensuring that the minimum amount of people are directly involved, as well as people-time, ensuring the least amount of manhours are spent. This also works on the long scale. If a question gets asked multiple times, it signifies a missed opportunity to capture past insight. It is essential to do this in a highly accessible place like a wiki, known and understood by all. It should be structured to match the immediate needs of those who need to read it. Dumping valuable information into chat is therefore an anti-pattern, requiring everyone to filter out the past nuggets of information based on the vague memory of them existing. A permanently updated record is a much better choice, and can serve as the central jumping off point to link to other, more ephemeral tools and resources. It should have every possible convenience for images, markup and app integration.

Unfortunately, few people will take the initiative on a blank canvas. There are two important reasons for this. The first is simply the bystander effect. If someone doesn't fill it out with placeholder outlines, clear instructions and pre-made templates, expect very little to happen organically. Make a place for project bibles, practical operations, one-time event organizing, etc. Also make sure you have a standard tool for diagramming, and some stencils for everything you draw frequently. It's invaluable, a picture says a thousand words. Encourage white board and paper sketching too, and editing them into other notes.

Second and more important: you need to get buy-in on the intent and expected benefits. This is hard. The environment in some companies is so dysfunctional, some people have learned that meetings exist to waste time, and ticket queues exist to grow long and stale. They will pattern-match sincere requests for participation to a request to waste their time. Or maybe they do appreciate those tools, but they've never been part of a development process where, by the time a ticket reaches a developer, the feature has been fully specced out and validated, and the bug is sufficiently analyzed and reproducible. To achieve this requires the design and QA team to have their own separate queues and tasks, as disciplined as the devs themselves.

Participants need to internalize that they can actually save everyone time, a tide that lifts all boats. It also translates into such luxuries as actually being able to take 2 weeks off without having to check your email. Fear of stepping on toes can prevent contributions from being attempted at all, so you should encourage the notion that the best critique comes in the form of additional proposed edits. Often, bad attempts at collaboration lead to a vicious cycle, where the few initiators burn out while reluctant non-participants feel helpless, until it gets abandoned.

In practice, swarm intelligence is a fickle thing. It can seem magical when things spontaneously come together, but often it's actually the result of some well spotted cow paths being paved, and a few helpful individuals picking up the slack to guide the group. You don't actually want an aimless mob, you want to have one or two captains per group, respected enough to resolve disputes and break ties. When done right, truly collaborative creation can be a wonderful thing, but most group dances require some choreography and practice. If your organization seems to magically run by itself regardless, consider you merely have no idea who's actually running it.

Legibility on Sale

In addition to day-to-day legibility of the workplace, there is a big need for accumulated legibility too. With so much communication now needing to be explicit rather than implicit, you run the risk of becoming incomprehensible to anyone who wasn't there from the start. If this becomes the norm, an unbridgeable divide forms between the old and the new guard, and the former group will only shrink, not grow.

A good antidote for this is to leverage the perspective of the newcomer. Any time someone new joins, they need to be onboarded, which means you are getting a free 3rd party audit of your processes. They will run into the stumbling blocks and pitfalls you step over without thinking. They will extract the information that nobody realizes only exists in everyone's heads. They will ask the obvious questions that haven't actually been written down yet, or even asked.

They should be encouraged to document their own learning process and the answers they obtain. This is a good way to make someone feel immediately valued, and the perfect way to teach them early the right habits of your information ecosystem. You get to see what you look like from the outside, so pay attention, and you will learn all your blind spots.

Who are the staff and their roles and competences? How can I reach someone for this thing, and when are they available? What are our current ongoing projects and when are they due? What's our strategic timeline, and what's our budget? What's the process for vacations, or expenses? Remote work takes away a thousand tiny opportunities to learn all this by osmosis, and you need to actively compensate.

The resulting need for transparency may seem daunting, particularly if you need to document financial and legal matters. It can feel like dropping your pants for all to see, opening the floodgates to envy and drama to boot. It's a mistake however to consider it superfluous, because that gate is always open, whether you want it or not. If left unaddressed, it will be found out through gossip regardless, only you won't hear about any accumulated resentment until it's likely too late to resolve amicably.

It's also a red flag if someone doesn't want to document important discussions and negotiations. Like a boss who prefers to talk about performance or a raise entirely verbally and off-the-record, out of anyone else's earshot. Or a worker who can't account for their own hours or tasks, and pretends what they do is simply too complicated to explain. Such tight control of who hears what is never good, and means someone is positioning themselves to control information going up and down an organization entirely for their own benefit. However, as the cost of record keeping has been reduced to practically nothing, employees have a fair amount of power to push back. Everyone should be encouraged to ask for written terms for deals and promises, and keep their own copies of their history, including key negotiations and discussions. They should store this outside of accounts that can be locked out upon dismissal, or tampered with by a malicious inside actor.

I leave you with a trope, the beast that is the Big Vision Meeting. Usually something has gone wrong which casts doubt on the company's future, or which puts management in a bad light, or both. Likely people are being "let go". Before this news can be delivered, the bosses must save face. So they give a 1-3 hour PowerPoint which projects the company into the future for a year or two, and lays out how successful they will be. Crucially absent will be the specifics of how they will get there, and instead you will get abstract playbooks, colorful diagrams and "market research" or "financial analyses" that don't have any real numbers in them.

It's important to consider the perspective of the worker here: the minute the Big Meeting starts, they already know something is up, because it is always called without notice. Everything that is not critically urgent is immediately put on hold. So they have to sit through this possibly hours-long spiel, wondering the entire time how bad it actually is, while the bosses think they are elevating spirits, in a stunning failure of self-awareness. Finally they tell them, the meeting ends soon after, and the question they had all along was never answered: how are we going to get through the next 2 weeks, what's our plan here?

The worst of the worst will do this by asking the non-fired employees to come in an hour late, so they can fire the unlucky ones by themselves, without having to own up in front of everyone at the same time why they had to let them go. Certain types abhor this lack of image control. You'll learn to spot them quickly enough. My real point though is what this Big Vision Meeting looks like when everyone's remote: they can just break the news individually, selling it as a personal touch, and don't even have to tell the same story to everyone all at once. Sometimes learning to deal with a fully remote environment means taking on the role of an investigator and archivist. Keep that in mind.

The best way to capture the necessary mindset is that of Minimum Viable Bureaucracy: we need to make our tools and processes work for us, with a minimum amount of fuss for the maximum amount of benefit, without any illusions that the technology will simply do it for us. It can even save your bacon when the shit hits the fan.

That means engaging in things many workers are often averse to, like creating meeting agendas, writing concise and comprehensive documentation, taking notes, making archives, and much more. But once people clue in that this actually saves time and effort in the long run, they'll wonder how they ever got things done without it.

Or at least I do.

Edit: Apparently I'm not the first to come up with the term!

03 Sep 01:30

On My Funny Ideas About What Beta Means

John Gruber has mentioned, on The Talk Show, that I’ve got some weird ideas about what beta means.

Here are my definitions:

development (d): everything is in progress and the app might be completely unusable.

alpha (a): the app is feature-complete and has no known bugs — but, importantly, it’s had very little testing.

beta (b): the app is feature-complete, has no known bugs, and has been tested — but further testing is still warranted. Every beta is a release candidate.

These are defined in a NetNewsWire Technote. It’s important to have definitions that everybody working on or testing the app understands.
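Purely as an illustration: the letters in parentheses above read like the stage suffixes some projects embed in build numbers (5.0d12, 5.0a1, 5.0b3). The tag format and the parser below are my assumption of how such a scheme could be decoded, not something taken from the NetNewsWire Technote.

import re

# Hypothetical stage suffixes matching the definitions above:
# d = development, a = alpha, b = beta.
STAGES = {"d": "development", "a": "alpha", "b": "beta"}
VERSION_RE = re.compile(r"^(?P<version>\d+(?:\.\d+)*)(?P<stage>[dab])(?P<build>\d+)$")

def parse_build(tag: str) -> dict:
    """Split a tag like '5.0b3' into version, stage, and build number."""
    match = VERSION_RE.match(tag)
    if not match:
        raise ValueError(f"unrecognized build tag: {tag!r}")
    return {
        "version": match["version"],
        "stage": STAGES[match["stage"]],
        "build": int(match["build"]),
        # per the definitions above, every beta is a release candidate
        "release_candidate": match["stage"] == "b",
    }

print(parse_build("5.0d12"))
print(parse_build("5.0b3"))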

But why these rather strict definitions?

It’s part of our commitment to quality. What matters is the end result — the shipping app — and these definitions make sure we don’t get to beta, or even alpha, with the app up on the table with wires sticking out and pieces missing.

This gives us a big space between development and shipping, and that space is all about making sure the bugs are all fixed.

This is a matter of ethics and pride in our work. Absolutely.

But it’s also pragmatic. This is an open source app, written by volunteers in their spare time, and having this rhythm baked-in to the process helps make sure we can uphold our standards even without full-time developers, managers, and testers.

* * *

And… it bugs me how little real attention our industry pays to quality these days. In some cases the consequences are disastrous; in other cases they’re merely expensive. It doesn’t have to be this way.

If it seems like I’m going too far with my definitions, well, I’m trying to bend the stick here.

03 Sep 01:30

Imagine if we didn’t know how to use books – notes on a digital practices framework

Dave Cormier, Dave’s Educational Blog, Sept 02, 2019
Icon

It's hard to imagine not knowing what to do with a book, but that was in fact the case in the days before near-universal literacy. Now we're in that position again, but with respect to digital technologies, says Dave Cormier. The comment comes in the context of presenting a draft of "a model for preparing an education system for the internet", and Cormier suggests that "some people are never going to make it all the way to being ready to teach with or on the internet." The model offers four progressively more accomplished levels of digital literacy, from basic awareness, through use, creativity, and finally, teaching.

Web: [Direct Link] [This Post]
03 Sep 01:29

Ubiquitous tools, connected things and intelligent agents: Disentangling the terminology and revealing underlying theoretical dimensions

Katrin Etzrodt, Sven Engesser, First Monday, Sept 02, 2019
Icon

This paper describes "the progressive merging of invisible tools, working in the background (Ubiquitous Computing), integrated into every domain of life and connected to the Internet (Internet of Things), with independently thinking and acting actors (Artificial Intelligence), implemented both into environments (Ambient Intelligence), and into everyday objects (Smart Objects)." In so doing, it extracts "four dimensions of modern technologies: Connectivity, Invisibility, Awareness, and Agency." Interesting read.

Web: [Direct Link] [This Post]
03 Sep 01:29

“What makes a Learning Technologist?” – Part 1 of 4: Job titles

Daniel Scott, ALT, Sept 02, 2019
Icon

The headline may seem to suggest it's a job title that makes a person a learning technologist, but that isn't the intent. Rather, the point is that learning technologists may have many different job titles. "34% (13) of respondents stated that they had Learning Technologist in their title, whilst 66% (25) had a different title, e.g. education, blended, designer or other that includes duties of a Learning Technologist." That seems like an odd place to start a series on "What makes a learning technologist", but I imagine the remaining articles of this series will have more substance.

Web: [Direct Link] [This Post]
03 Sep 01:29

Expert predicts 25% of colleges will "fail" in the next 20 years

CBS News, Sept 02, 2019
Icon

The number refers to colleges in the United States, which has a different system than most of the rest of the world. "They're going to close, they're going to merge, some will declare some form of bankruptcy to reinvent themselves. It's going to be brutal across American higher education." The article tells us only that the expert, Michael Horn, "studies education at Harvard University", but I would imagine it's this Michael Horn from the Christensen Institute, which may lead you to take the prediction with a grain of salt.

Web: [Direct Link] [This Post]
03 Sep 01:28

Bertrand Russell has a Near Death Experience - existentialcomics.com/comic/304 pic.twitter.com/KC8QigcK4d

by existentialcoms
mkalus shared this story from existentialcoms on Twitter.

03 Sep 01:28

Twitter Favorites: [Lesley_NOPE] Boba Chett! https://t.co/BD2GauurCb

Turns out I’m 💯 That Pitch @Lesley_NOPE
Boba Chett! pic.twitter.com/BD2GauurCb
03 Sep 01:25

Twitter Favorites: [heyrickie] I finished August with just over 100 km of running. I’m still on track (with some effort) to get to 1,000 km for th… https://t.co/4vwmvPuYJb

Eric Bucad @heyrickie
I finished August with just over 100 km of running. I’m still on track (with some effort) to get to 1,000 km for th… twitter.com/i/web/status/1…
03 Sep 01:25

Twitter Favorites: [jneilsonTO] Vancouver has information boards along its seawall describing protected view corridors and their intent. First time… https://t.co/2qsXCQ78ZI

James Neilson @jneilsonTO
Vancouver has information boards along its seawall describing protected view corridors and their intent. First time… twitter.com/i/web/status/1…
03 Sep 01:25

Twitter Favorites: [normwilner] This is a public apology to @rizmc, who told me Lahore Tikka House was his favorite Toronto curry spot like five ye… https://t.co/BINaGYDLZ8

Norm Wilner @normwilner
This is a public apology to @rizmc, who told me Lahore Tikka House was his favorite Toronto curry spot like five ye… twitter.com/i/web/status/1…
02 Sep 00:50

TTC's long, bendy streetcars to carry riders for last time on Labour Day

mkalus shared this story .

The last two of Toronto's "bendy" streetcars have reached the end of their working life, but the TTC is pressing both into service on Labour Day one last time before they officially retire.

"We have squeezed as much as we can out of these great streetcars and the final two will be out of service as of tomorrow," Mike DeToma, spokesperson for the Toronto Transit Commission, said on Sunday.

The streetcars, known as ALRVs, or articulated light rail vehicles, will rumble and clang into service on Monday from 2 p.m. to 5 p.m. on Queen Street.

They will offer free rides westbound and eastbound, between the TTC's Russell Carhouse, near Queen Street East and Greenwood Avenue, and the Wolseley Street Loop, near Bathurst Street and Queen Street West. 

Originally part of a fleet of 52, the streetcars have been transporting TTC passengers for more than 30 years. The TTC introduced the bendy streetcars into service in 1988.

One to be kept in storage for posterity

One of the last two will be refurbished and kept in TTC storage for historical purposes, to be wheeled out for special tours and charters, while the other will be salvaged for parts and then scrapped. The TTC retired the other 50 in the fleet in the last two years.

"It will be the last day the folks of Toronto will see our original articulated streetcars in service," DeToma said. 

"It is absolutely the end of an era. It's been 30-plus years since they first came into service, but they have finally reached the end of the line," he added.

"Anyone who is really interested in being a part of history, getting a last opportunity to take a ride on these original articulated streetcars, tomorrow between 2 and 5 on Queen Street is the time to do it."

The streetcars will enter their golden years as the TTC continues to acquire low-floor vehicles to modernize its streetcar fleet.

"They were the first articulated streetcars that the TTC purchased. These were extended streetcars, twice the length of the CLRV (Canadian Light Rail Vehicle) streetcars, the ones that are being retired slowly from service as the new low-floor vehicles come online," he said. 

"These articulated streetcars were the first bendy streetcars that Toronto citizens saw in service. They were, at the time, modern streetcars, full of electronics, new gadgetry. And after 30 years of service, they are going to be finally retired."

Over the years, the streetcars were repaired and rebuilt.

Vehicles allowed TTC to boost capacity on busy routes

In a news release, the TTC said the vehicles were "considered a landmark achievement at the time, allowing the TTC to increase capacity on its busiest routes at a time ridership was increasing."

One of the streetcars will make an appearance in Toronto's Labour Day Parade, DeToma said.

According to the TTC, Car 4204 will leave from the Russell Carhouse at 2 p.m., heading west to Wolseley Loop and back. Meanwhile, Car 4207 will leave from Wolseley Loop at about the same time, although road closures and CNE traffic could affect when it hits the road.

The TTC said the two streetcars will make return trips between the Russell Carhouse and the Wolseley Loop until 5 p.m. The very last run will leave Wolseley at about 4:15 p.m., arriving at the Russell Carhouse at 5 p.m.

"It will be back and forth between those destinations," he said.

The CLRV streetcars, the shorter version of the articulated streetcars, will likely be retired at the end of the year, he added.

The first ALRV entered service on Jan. 19, 1988 on the 507 Long Branch route, the TTC said.