On the surface, driving a car might seem fairly straightforward. Follow the rules of the road, don’t crash, and watch out for others. So why not just let a computer do all of the work? The Washington Post provides an interactive simulator to put you in the passenger seat and see for yourself.
by Justin Vassallo, Thais Wilson-Soler, and Daniel Varghese
There’s a reason pour-over is the preferred brewing method at high-end cafés: It’s a simple way to make coffee with intricate flavors that you might not get from a machine. For the best results, start with a good dripper. After brewing more than 150 cups, we found the Kalita Wave 185 Dripper to be the most consistent and easy to master. If you want to brew a truly great cup, we also recommend a grinder, kettle, and scale.
This is a long long looooong text (900 page PDF) but so good. It takes its time, explains clearly, and stays well within a traditional perspective, which is what we want from a textbook. Thus we read to the end of page 68 before we have finished defining philosophy as "the personal search for truth, in any field, by rational means." We then have another 95 pages defining 'computer science' (including some side-discussion on whether we should be drawing sharp boundaries in general, and a detailed consideration of whether computer science is a type of magic). Do look at the five key insights (pp148-149). So what is computer science? Essentially, "to capture the messy complexity of the natural world and express it algorithmically." That sets up the next question, "what is science," and so on. I don't think you could ask for more from such a text, really, not even brevity. Image: XKCD, also on p. 318.
If you have an iPhone and you want a smartwatch, there’s no reason to choose anything but an Apple Watch. It’s the best way to keep up with notifications, track fitness, get directions, and use apps without having to constantly reach for your phone.
Pines and firs are dying across the Pacific Northwest, fires rage across the Amazon, it’s the hottest it’s ever been in Paris—climate change is impacting the whole planet, and things are not getting any better.
You want to do something about climate change, but you’re not sure what.
If you do some research you might encounter an essay by Bret Victor—What can a technologist do about climate change?
There’s a whole pile of good ideas in there, and it’s worth reading, but the short version is that you can use technology to “create options for policy-makers.”
Thing is, policy-makers aren’t doing very much.
So this essay isn’t about technology, because technology isn’t the bottleneck right now. It’s about policy and politics, and what you can do about them.
It’s still written for software developers, because that’s who I write for, but also because software developers often have access to two critical catalysts for political change.
And it’s written for software developers in the US, because that’s where I live, and because the US is a big part of the problem.
But before I go into what you can do, let me tell you the story of a small success I happened to be involved in, a small step towards a better future.
Infrastructure and the status quo
About a year ago I spent some of my mornings handing out pamphlets to bicycle riders.
I looked like an idiot: in order to show I was one of them I wore my bike helmet, which is weirdly shaped and the color of fluorescent yellow snot.
After finding an intersection with plenty of bicycle riders and a long red light that forces them to stop, I would do the following:
When the light turns red, step into the street and hand out the pamphlet.
Keep an eye out for the light changing to green so that I didn’t get run over by moving cars.
Twiddle my thumbs waiting for the next light cycle.
It was boring, and not very glamorous.
I was just one of many volunteers, and besides gathering signatures we also held rallies, had conversations with city councilors and staff, wrote emails, and spoke at city council meetings—it was a process.
The total effort took a couple of years (and I only joined in towards the end)—but in the end we succeeded.
We succeeded in having the council pass a short ordinance, a city-level law in the city of Cambridge, Massachusetts.
The ordinance states that whenever a road slated for protected bike lanes (per the city’s Bike Plan) is rebuilt from scratch, those lanes get built by default.
Now, clearly this ordinance isn’t going to solve climate change.
In fact, nothing Cambridge does as a city will solve climate change, because there’s only so much impact 100,000 people can have on greenhouse gas emissions.
But while in some ways this ordinance was a tiny victory in a massive war, if we take a step back it’s actually more important than it seems.
In particular, this ordinance has three effects:
Locally, safer bike infrastructure means more bicycle riders, and fewer car drivers. That reduces emissions—a little.
Over time, more bicycle riders can kick off a positive feedback cycle, reducing emissions even more.
Most significantly, local initiatives spread to other cities—kicking off these three effects in those other cities.
Let’s examine these effects one by one.
Effect #1: Fewer cars, less emissions
About 43% of the greenhouse gas emissions in Massachusetts are due to transportation; for the US overall it’s 29% (ref).
And that means cars.
The reason people in the US mostly drive cars is because all the transportation infrastructure is built for cars.
No bike lanes; infrequent, slow, or non-existent buses; no trains…
Even in cities, where other means of transportation are feasible, the whole built infrastructure sends the very strong message that cars are the only reasonable way to get around.
If we focus on bicycles, our example at hand, the problem is that riding a bicycle can be dangerous—mostly because of all those cars!
But if you get rid of the danger and build good infrastructure—dedicated protected bike lanes that separate bicycle riders from those dangerous cars—then bicycle use goes up.
Consider what Copenhagen achieved between 2008 and 2018 (ref):

                                           2008   2018
  # of seriously injured cyclists           121     81
  % of residents who feel secure cycling     51     77
  % who cycle to work/school                 37     49
With safer infrastructure for bicycles, perception of safety goes up, and people bike more and drive less.
Similarly, if you have frequent, fast, and reliable buses and trains, people drive less.
And that means lower carbon emissions.
In Copenhagen the number of kilometers driven by cars was flat or slightly down over those 10 years—whereas in the US, it’s up 6-7% (ref).
Effect #2: A positive feedback loop
The changes in Copenhagen are a result of a plan the city government there adopted in 2011 (ref): they’re the result of a policy action.
And the political will was there in part because there were already a huge number of bicycle riders.
So it’s a positive feedback loop, and a good one.
Let’s see how this is happening in Cambridge:
Cambridge has a slowly growing number of bicycle riders.
This means more political support for bike infrastructure—if there’s a group that can mobilize that support!
With the ordinance, more roads will have safe infrastructure.
For example, one neighborhood previously had a safe route only in one direction; the other direction will be rebuilt with a protected bike lane in 2020.
With safer infrastructure, there will be more bicycle riders, and therefore more support by residents for safer infrastructure.
Merely having support isn’t enough, of course, and I’ll get back to that later on.
If Copenhagen can reach 50% of residents with a bicycle commute, so can Cambridge—and the ordinance is a good step in that direction.
Effect #3: The idea spreads
The Cambridge ordinance passed in April 2019—and the idea is spreading elsewhere:
The California State Assembly is voting on a law with similar provisions (ref), through a parallel push by Calbike.
In May 2019 a Washington DC Council member introduced a bill which among other points has the same rebuild requirements as the Cambridge ordinance (ref).
The Seattle City Council passed an ordinance, parts of which were literally copy/pasted from the Cambridge ordinance (ref).
All of this is the result of local advocacy—but I’ve no doubt Cambridge’s example helped.
It’s always easier to be the second adopter.
And the examples from these larger localities will no doubt inspire other groups and cities, spreading the idea even more.
Change requires politics
Bike infrastructure is just an example, not a solution—but there are three takeaways from this story that I’d like to emphasize:
If you want to change policy, you need to engage in politics.
Politics is easier to influence at the local level.
Local policy changes have a cumulative, larger-scale impact.
By politics I don’t just mean having an opinion or voting for a candidate, but rather engaging in the process of how policy decisions are made.
Merely having an opinion doesn’t change anything.
For example, two-thirds of Cambridge residents support building more protected bike lanes (ref).
But that doesn’t mean that many protected lanes are getting built—the neighboring much smaller city of Somerville is building far more than Cambridge.
The only reason the city polled residents about bike lanes at all, one suspects, is that all the fuss we’d been making—emails, rallies, meetings, city council policy orders—made the city staff wonder whether bike infrastructure really had broad public support.
Voting results in some change, but not enough.
Elected officials and government staff have lots and lots of things to worry about—if they’re not being pressured to focus on a particular issue, it’s likely to fall behind.
What’s more, the candidates you get to vote for have to get on the ballot, and to do that they need money (for advertising, hiring staff, buying supplies).
Lacking money, they need volunteer time.
And it’s much easier for a small group of rich people to provide that support to the candidates they want—so by the time you’re voting, you only get to choose between candidates that have been pre-vetted (I highly recommend reading The Golden Rule to understand how this works on a national level).
What you can do: Become an activist
In the end power is social.
Power comes from people showing up to meetings, people showing up for rallies, people going door-to-door convincing other people to vote for the right person or support the right initiative, people blocking roads and making a fuss.
And that takes time and money.
So if you want to change policy, you need to engage in politics, with time and money:
You can volunteer for candidates’ political campaigns, as early as possible in the process.
Too many good candidates get filtered out before they even make the ballot.
That doesn’t mean you can just go home after the election—that’s when the real work of legislation starts, which means activism is just as important.
You can volunteer with groups either acting on a particular issue (transportation, housing policy) or more broadly on climate change.
Also useful is donating money to political campaigns, both candidates and issue-based organizations.
Here are some policies you might be interested in:
Transportation policy determines what infrastructure is built—and the current infrastructure favors privately-owned cars over public transportation and bicycles.
Zoning laws determine what gets built and where.
Denser construction would reduce the need for long trips, and more efficient buildings (ideally net zero carbon) would reduce emissions from heating and cooling.
Moving utilities from private to public ownership, so they can focus on the public good and not on profit.
Bulk municipal contracts for electricity: this allows for cheaper electricity for all residents, and to have green energy as the default.
State-level carbon restrictions or taxes.
Where you should do it: Start local
If you are going to become an activist, the local level is a good starting point.
An easier first step: Cambridge has 100,000 residents—city councilors are routinely elected with just 2500 votes.
That means impacting policies here is much easier than at a larger scale.
Not only does this mean faster results, it also means you’re less likely to get discouraged and give up—you can see the change happening.
Direct impact: A significant share of greenhouse gas emissions in the US is due to causes under the control of local governments.
Wider impact: As in the case of Cambridge’s ordinance, local changes can be adopted elsewhere.
Of course, local organizing is just the starting point for creating change on the global level.
But you have to start somewhere.
And global change is a lot easier if you have thousands of local organizations supporting it.
It’s good to be a software developer
Let’s get back to our starting point—you’re paid to write software, you want to do something about climate change.
As a software developer you likely have access to the inputs needed to make political campaigns succeed—both candidate-based and issue-based:
Money: Software developers tend to get paid pretty well, certainly better than most Americans.
Chances are you have some money to spare for political donations.
Time: This one is a bit more controversial, but in my experience many programmers can get more free time if they want to.
If you don’t have children or other responsibilities, you can work a 40-hour workweek, leaving you time for other things.
Before I got married I worked full-time and went to a local adult education college half-time in the evenings: it was a lot of work, but it was totally doable.
Set boundaries at your job, and you’ll have at least some free time for activism.
If you need help doing it yourself, I’ve written a book to help you negotiate a shorter workweek.
If you want to negotiate a shorter workweek so you have time for political activism, you can use the code FIGHTCLIMATECHANGE to get the book for 60% off.
Some common responses
“There will never be the political will to make this happen”
Things do change, for better and for worse, and sometimes unexpectedly.
To give a couple of examples:
In Ireland, the Catholic Church went from all-powerful to losing badly, most recently with Ireland legalizing abortion.
The anti-gay-marriage Defense of Marriage Act was passed by veto-proof majorities of Congress in 1996—and eight years later in 2004 the first legal gay marriage took place right here in Cambridge, MA.
The timelines for gay marriage and cannabis legalization in the US are illuminating: these things didn’t just happen, it was the result of long, sustained activist efforts, much of it at the local level.
Local changes do make a difference.
“Politics is awful and broken”
So are all our software tools, and somehow we manage to get things done!
“I don’t like your policy suggestions, we should do X instead”
No problem, find the local groups that promote your favorite policies and join them.
“The necessary policies will never work because of problem Y”
Same answer: join and help the local groups working on Y.
“It’s too late, the planet is doomed no matter what we do”
Perhaps, but it’s very hard to say.
So we’re in Pascal’s Wager territory here: given even a tiny chance there is something we can do, we had better do our best to make it happen.
And even if humanity really is doomed, there’s always the hope that someday a hyperintelligent species of cockroach will inherit the Earth.
And when cockroach archaeologists try to reconstruct our history, I would like them to be able to say, loosely translated from their complex pheromone-and-dancing system of communication: “These meatsacks may not have been as good at surviving as us cockroaches—but at least they tried!”
Time to get started
If you find this argument compelling—that policy is driven by power, and that power requires social mobilization—then it’s up to you to take the next step.
Find a local group or candidate pushing for a policy you care about, and show up for the next meeting.
And the meeting after that.
And then go to the rally.
And knock on doors.
And make some friends, and make some changes happen.
Some of the work is fun, some of it is boring, but there’s plenty to do—time to get started!
Tired of scrambling to get your job done?
If you were productive enough, you could take the afternoon off, confident you’d produced high value work. Not to mention having an easier time finding a new job when you need one.
Slightly confusing announcement from Google: they're introducing rel=ugc and rel=sponsored in addition to rel=nofollow, and will be treating all three values as "hints" for their indexing system. They're very unclear as to what the concrete effects of these hints will be, presumably because they will become part of the secret sauce of their ranking algorithm.
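To make the announcement concrete, here is a minimal sketch of how a site might choose among the three rel values when rendering outbound links. The attribute values (nofollow, ugc, sponsored) are the ones from Google's announcement; the helper function itself is hypothetical, not any real CMS's API:

```python
import html

def outbound_link(url, text, kind=None):
    """Render an outbound link, tagging it per Google's rel hints.

    kind: None for a normal editorial link, "ugc" for user-generated
    content (comments, forum posts), "sponsored" for paid placements,
    "nofollow" to opt out of passing ranking signals entirely.
    """
    rel = f' rel="{kind}"' if kind in ("ugc", "sponsored", "nofollow") else ""
    return f'<a href="{html.escape(url, quote=True)}"{rel}>{html.escape(text)}</a>'

# A comment-section link gets rel="ugc"; a paid placement would get "sponsored".
print(outbound_link("https://example.com", "example", kind="ugc"))
```

Since all three values are now treated as hints, the practical takeaway is simply to label links honestly and let Google's ranking system decide what to do with them.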
I’m going to spend the year thinking about and working on tools for data journalism. More details below, but the short version is that I want to help make the kind of data reporting we’re seeing from well funded publications like the New York Times, the Washington Post and the LA Times more accessible to smaller publications that don’t have the budget for full-time software engineers.
I’ve worked with newspapers a few times in the past: I helped create what would later become Django at the Lawrence Journal-World fifteen years ago, and I spent two years working on data journalism projects at the Guardian in London before being sucked into the tech startup world. My Datasette project was inspired by the challenges I saw at the Guardian, and I’m hoping to evolve it (and its accompanying ecosystem) in as useful a way as possible.
This fellowship is a chance for me to get fully embedded back in that world. I could not be more excited about it!
Here’s the part of my fellowship application (written back in January) which describes what I’m hoping to do. The program is extremely flexible and there is plenty of opportunity for me to change my focus if something more useful emerges from my research, but this provides a good indication of where my current thinking lies.
What is your fellowship proposal?
Think of this as your title or headline for your proposal. (25 words or less)
How might we grow an open source ecosystem of tools to help data journalists collect, analyze and publish the data underlying their stories?
Now, tell us more about your proposal. Why is it important to the challenges facing journalism and journalists today? How might it create meaningful change or advance the work of journalists? (600 words or less)
Data journalism is a crucial discipline for discovering and explaining true stories about the modern world - but effective data-driven reporting still requires tools and skills that are not widely available outside of large, well funded news organizations.
Making these techniques readily available to smaller, local publications can help them punch above their weight, producing more impactful stories that overcome the challenges posed by their constrained resources.
Tools that work for smaller publications can work for larger publications as well. Reducing the time and money needed to produce great data journalism raises all boats and enables journalists to re-invest their improved productivity in ever more ambitious reporting projects.
Academic journals are moving towards publishing both the code and data that underlie their papers, encouraging reproducibility and better sharing of the underlying techniques. I want to encourage the same culture for data journalism, in the hope that “showing your working” can help fight misinformation and improve readers’ trust in the stories that are derived from the data.
I would like to use a JSK fellowship to build an ecosystem of data journalism tools that make data-driven reporting as productive and reproducible as possible, while opening it up to a much wider group of journalists.
At the core of my proposal is my Datasette open source project. I’ve been running this as a side-project for a year with some success: newspapers that have used it include the Baltimore Sun, who used it for their public salary records project: https://salaries.news.baltimoresun.com/. By dedicating myself to the project full-time I anticipate being able to greatly accelerate the pace of development and my ability to spend time teaching news organizations how to take advantage of it.
More importantly, the JSK fellowship would give me high quality access to journalism students, professors and professionals. A large portion of my fellowship would be spent talking to a wide pool of potential users and learning exactly what people need from the project.
I do not intend to be the only developer behind Datasette: I plan to deliberately grow the pool of contributors, both to the Datasette core project but also in developing tools and plugins that enhance the project’s capabilities. The great thing about a plugin ecosystem is that it removes the need for a gatekeeper: anyone can build and release a plugin independent of Datasette core, which both lowers the barriers to entry and dramatically increases the rate at which new functionality becomes available to all Datasette users.
My goal for the fellowship is to encourage the growth of open source tools that can be used by data journalists to increase the impact of their work. My experience at the Guardian led me to Datasette as a promising avenue for this, but in talking to practitioners and students I hope to find other opportunities for tools that can help. My experience as a startup founder, R&D software engineer and open source contributor puts me in an excellent position to help create these tools in partnership with the wider open source community.
> Does the world need a skeuomorphic notebook
> app anymore?
I think that next year we'll see the beginning of a potentially mass migration of the better iPad apps to macOS, using Apple's Catalyst framework announced this summer at WWDC. It's rough around the edges now, but I think that there are some iPad notebook apps that are really excellent, and if they become cross-platform—with shared code in Mac versions, the ability to use the Pencil on the iPad, and screen sharing via Apple's new Sidecar functionality—then they'll become deadly serious competition for NoteTaker (especially since the apps' development is mainly being funded by iOS sales). I'm thinking specifically of apps like Goodnotes, Noteshelf and MyScript Nebo.
We were joined at timing by Francesco and Giorgio. When the winds are high enough, you can use Giorgio’s hair as a wind gauge.
Heat 1: Josh ran 63.99 mph, but the wind kicked up as he neared the traps.
Yasmin crashed Arion 5. Word is that they are going to switch away from the tubeless tires that they have been using up until now. At any rate, they have scratched out of their runs for tomorrow morning.
Heat 2 was scratched in its entirety due to high winds.
This is what timers do when they are bored.
For Heat 3, Giorgio was counselling Andrea over the radio not to run due to a heavy headwind—lots of words, including “molto”. Francesco described the wind as “too much dot a lot”. Nevertheless, Andrea chose to run anyway, and came screaming through the traps at 33 mph.
Ishtey also chose to run, and ran 51.29 mph—faster than Andrea.
All this trouble for the shortest timing tape of the week thus far.
Results
Winds look better for tomorrow.
Hans fixed Adam’s wheel so that he can run tomorrow.
For some reason Policumbent is closing out the evening by showing Vittoria as many WHPSC crash videos as they can find on YouTube, starting with the glowworm video.
There are kids now who are old enough to go fight in wars that were justified back when they still had their umbilical cords attached. For them, the attacks are only history; that's all they ever were.
I can sort of understand it. I was born after Vietnam, after Watergate. But my whole childhood, these were the points of reference, the milestones used to justify choices or underpin arguments. They were totems, symbols, history. Only history, not something real.
And these days I spend a lot of time with people for whom 9/11 is only history. Someone who is as old today as I was that day would probably have been shielded from knowing too much at the time. They'd have been as young as my son is today, and I still shield him from the full weight of my memories of that day.
Frankly, that's hard for me to process. We still ground so much of our social and political rhetoric in that day, so it's easy for me to forget that for many (for most?), it's only a signifier, with none of the visceral evocations of memory. I remember how 9/11 smelled. I remember how my city smelled, for months. I can still look up at a clear blue sky with the slightest breeze in the air and imagine that morning, and remember when the wind shifted and the plume came over our neighborhood.
So, it's up to me to let it go. To let it become history. I don't mind that younger people don't have the same reverence for the moment. Maybe I'm kind of glad they don't. I can see now what I couldn't see then. Those of us who witnessed that day, even if only in a small way, will fade from the cultural conversation. When I was young, you would still hear people talk about D Day, or Pearl Harbor. Being young and insolent, I never had much patience for listening to the stories. Young people don't have reverence for these moments.
I guess we said we'd never forget. But the consensus around grief, around memorial, around observance — that fragile fiction faded by the time the smell dissipated. We didn't get around to deciding what it was, exactly, that we were Never Forgetting.
And so, we ended up with this legacy. There are ritualized remembrances, largely led by those who weren't there, those who mostly hate the values that New York City embodies. The sharpest memories are of the goals of those who masterminded the attacks. It's easy enough to remember what they wanted, since they accomplished all their objectives and we live in the world they sought to create. The empire has been permanently diminished. Never Forget.
But I still maintain a little bit of hope. Because just as a new generation doesn't revere the memories of those of us old enough to bear witness, they are also not despairing over the failures of our follow-up. This is the world they've always lived in, the starting point they were always given. So there was never anything other than moving forward from that day.
In Past Years
Each year I write about the attacks on this anniversary, to reflect both on that day and where I'm at right now. I also deeply appreciate the conversation that ensues with those who check in with me every year.
I spent so many years thinking “I can’t go there” that it caught me completely off guard to realize that going there is now routine. Maybe the most charitable way to look at it is resiliency, or that I’m seeing things through the eyes of my child, who’s never known any reality but the present one. I'd spent a lot of time wishing that we hadn't been so overwhelmed with response to that day, so much that I hadn't considered what it would be like when the day passed for so many people with barely a notice at all.
So, like ten years ago, I’m letting go. Trying not to project my feelings onto this anniversary, just quietly remembering that morning and how it felt. My son asked me a couple of months ago, “I heard there was another World Trade Center before this one?” and I had to find a version of the story that I could share with him. In this telling, losing those towers was unimaginably sad and showed that there are incredibly hurtful people in the world, but there are still so many good people, and they can make wonderful things together.
I don’t dismiss or deny that so much has gone so wrong in the response and the reaction that our culture has had since the attacks, but I will not forget or diminish the pure openheartedness I witnessed that day. And I will not let the cynicism or paranoia of others draw me in to join them.
What I’ve realized, simply, is that 9/11 is in the past now.
For the first time, I clearly felt like I had put the attacks firmly in the past. They have loosened their grip on me. I don’t avoid going downtown, or take circuitous routes to avoid seeing where the towers once stood. I can even imagine deliberately visiting the area to see the new train station.
There’s no part of that day that one should ever have to explain to a child, but I realized for the first time this year that, when the time comes, I’ll be ready. Enough time has passed that I could recite the facts, without simply dissolving into a puddle of my own unresolved questions. I look back at past years, at my own observances of this anniversary, and see how I veered from crushingly sad to fiercely angry to tentatively optimistic, and in each of those moments I was living in one part of what I felt. Maybe I’m ready to see this thing in a bigger picture, or at least from a perspective outside of just myself.
I thought in 2001 that some beautiful things could come out of that worst of days, and sure enough, that optimism has often been rewarded. There are boundless examples of kindness and generosity in the worst of circumstances that justify the hope I had for people’s basic decency back then, even if initially my hope was based only on faith and not fact.
But there is also fatigue. The inevitable fading of outrage and emotional devastation into an overworked rhetorical reference point leaves me exhausted. The decay of a brief, profound moment of unity and reflection into a cheap device to be used to prop up arguments about the ordinary, the everyday and the mundane makes me weary. I’m tired from the effort to protect the fragile memory of something horrific and hopeful that taught me about people at their very best and at their very, very worst.
These are the gifts our children, or all children, give us every day in a million different ways. But they’re also the gifts we give ourselves when we make something meaningful and beautiful. The new World Trade Center buildings are beautiful, in a way that the old ones never were, and in a way that’ll make our fretting over their exorbitant cost seem short-sighted in the decades to come. More importantly, they exist. We made them, together. We raised them in the past eleven years just as surely as we’ve raised our children, with squabbles and mistakes and false starts and slow, inexorable progress toward something beautiful.
I don’t have any profound insights or political commentary to offer that others haven’t already articulated first and better. All that I have is my experience of knowing what it meant to be in New York City then. And from that experience, the biggest lesson I have taken is that I have the obligation to be a kinder man, a more thoughtful man, and someone who lives with as much passion and sincerity as possible. Those are the lessons that I’ll tell my son some day in the distant future, and they’re the ones I want to remember now.
[T]his is, in many ways, a golden era in the entire history of New York City. Over the four hundred years it’s taken for this city to evolve into its current form, there’s never been a better time to walk down the street. Crime is low, without us having sacrificed our personality or passion to get there. We’ve invested in making our sidewalks more walkable, our streets more accommodating of the bikes and buses and taxis that convey us around our town. There’s never been a more vibrant scene in the arts, music or fashion here. And in less than half a decade, the public park where I got married went from a place where I often felt uncomfortable at noontime to one that I wanted to bring together my closest friends and family on the best day of my life. We still struggle with radical inequality, but more people interact with people from broadly different social classes and cultures every day in New York than any other place in America, and possibly than in any other city in the world.
And all of this happened, by choice, in the years since the attacks.
[T]his year, I am much more at peace. It may be that, finally, we’ve been called on by our leadership to mark this day by being of service to our communities, our country, and our fellow humans. I’ve been trying of late to do exactly that. And I’ve had a bit of a realization about how my own life was changed by that day.
Speaking to my mother last week, I offhandedly mentioned how almost all of my friends and acquaintances, my entire career and my accomplishments, my ambitions and hopes have all been born since September 11, 2001. If you’ll pardon the geeky reference, it’s as if my life was rebooted that day and in the short period afterwards. While I have a handful of lifelong friends with whom I’ve stayed in touch, most of the people I’m closest to are those who were with me on the day of the attacks or shortly thereafter, and the goals I have for myself are those which I formed in the next days and weeks. I don’t think it’s a coincidence that I was introduced to my wife while the wreckage at the site of the towers was still smoldering, or that I resolved to have my life’s work amount to something meaningful while my beloved city was still papered with signs mourning the missing.
Finally getting angry myself, I realize that nobody has more right to claim authority over the legacy of the attacks than the people of New York. And yet, I don’t see survivors of the attacks downtown claiming the exclusive right to represent the noble ambition of Never Forgetting. I’m not saying that people never mention the attacks here in New York, but there’s a genuine awareness that, if you use the attacks as justification for your position, the person you’re addressing may well have lost more than you that day. As I write this, I know that parked out front is the car of a woman who works in my neighborhood. Her car has a simple but striking memorial on it, listing her mother’s name, date of birth, and the date 9/11/2001.
On the afternoon of September 11th, 2001, and especially on September 12th, I wasn’t only sad. I was also hopeful. I wanted to believe that we wouldn’t just Never Forget, but that we would also Always Remember. People were already insisting that we’d put aside our differences and come together, and maybe the part that I’m most bittersweet and wistful about is that I really believed it. I’d turned 26 years old just a few days before the attacks, and I realize in retrospect that maybe that moment, as I eased from my mid-twenties to my late twenties, was the last time I’d be unabashedly optimistic about something, even amidst all the sorrow.
[O]ne of the strongest feelings I came away with on the day of the attacks was a feeling of some kind of hope. Being in New York that day really showed me the best that people can be. As much as it’s become cliché now, there’s simply no other way to describe a display that profound. It was truly a case of people showing their very best nature.
We seem to have let the hope of that day go, though.
I saw people who hated New York City, or at least didn’t care very much about it, trying to act as if they were extremely invested in recovering from the attacks, or opining about the causes or effects of the attacks. And to me, my memory of the attacks and, especially, the days afterward had nothing to do with the geopolitics of the situation. They were about a real human tragedy, and about the people who were there and affected, and about everything but placing blame and pointing fingers. It felt thoughtless for everyone to offer their response in a framework that didn’t honor the people who were actually going through the event.
I don’t know if it’s distance, or just the passing of time, but I notice how muted the sorrow is. There’s a passivity, a lack of passion to the observances. I knew it would come, in the same way that a friend told me quite presciently that day back in 2001 that “this is all going to be political debates someday” and, well, someday’s already here.
I spent a lot of time, too much time, resenting people who were visiting our city, and especially the site of the attacks, these past two years. I’ve been so protective, I didn’t want them to come and get their picture taken like it was Cinderella’s Castle or something. I’m trying really hard not to be so angry about that these days. I found that being angry kept me from doing the productive and important things that really mattered, and kept me from living a life that I know I’m lucky to have.
[I]n those first weeks, I thought a lot about what it is to be American. That a lot of people outside of New York City might not even recognize their own country if they came to visit. The America that was attacked a year ago was an America where people are as likely to have been born outside the borders of the U.S. as not. Where most of the residents speak another language in addition to English. Where the soundtrack is, yes, jazz and blues and rock and roll, but also hip hop and salsa and merengue. New York has always been where the first fine threads of new cultures work their way into the fabric of America, and the city that bore the brunt of those attacks last September reflected that ideal to its fullest.
I am physically fine, as are all my family members and immediate friends. I’ve been watching the footage all morning, I can’t believe I watched the World Trade Center collapse…
I’ve been sitting here this whole morning, choking back tears… this is just too much, too big. I can see the smoke and ash from the street here. I have friends of friends who work there, I was just there myself the day before yesterday. I can’t process this all. I don’t want to.
Sun not really out yet in this area, but she was...
During my month off in August I journeyed to Africa with a couple of friends on a low-key, no expectations trip.
Okay, you never have no expectations when you go to Africa to shoot wildlife, but I've got enough images in my files from 25 years of doing this that I can just go and enjoy what happens, not need something particular to happen.
First, sending a mass email to your current audience to attract a new audience is clearly dumb. The audience you’re trying to reach won’t see it, and the audience that receives it will find it irrelevant.
Second, I doubt ‘young Londoners’ refer to themselves as ‘young Londoners’. Instead they are a collection of dozens, even hundreds, of smaller sub-groups within the city, each with unique identities and unique needs.
You attract them by spending time with each audience and learning exactly what they need and how they communicate with each other (ideally, you would want members of the target group to write the email which gets sent only to other members).
Third, privacy policies, reporting functions, and preview features are far less exciting than whatever the audience can do on YouTube, Facebook, and whatever is on TV right now. You’re not competing with how the community used to be, you’re competing against whatever is the most exciting and interesting things the audience can do this minute.
If you want any audiences, and perhaps especially young audiences, to share ideas about the future of London, I’d suggest making it deeply personal to them. What do they want their futures to look like? What does the city need to provide for them to make that happen? That’s how you get better ideas and feedback.
Only send emails to the specific target audience with the most exciting updates which help them achieve their goals. Anything else is a waste.
At Apple’s annual iPhone event in Cupertino, California, the company revealed the new iPhone 11.
The phone follows in the footsteps of last year’s iPhone XR and notably adds another rear-facing camera and two new colour options — ‘Purple’ and ‘Green.’
Beyond this, the phone is also rocking Apple’s A13 Bionic chip. The company says this is the fastest chip in any smartphone.
The iPhone 11 features the same 6.1-inch 1792 x 828 Liquid Retina LCD display included in the iPhone XR. Beyond the new green and purple hues, the phone also comes in ‘White,’ ‘Yellow,’ ‘Blue’ and ‘Black,’ plus the usual ‘Product Red’ offering.
Apple is using something called spatial audio to help make the iPhone sound better when you’re watching video on it.
The iPhone 11 is tougher this time around, too, since Apple says the device is made out of stronger glass than last year’s model. The iPhone 11 also carries an IP68 water and dust resistance rating, allowing it to be submerged up to two metres for up to 30 minutes.
The dual-camera system also includes a new wide camera. Apple says that overall, the camera is capable of focusing much faster than before. Plus, there’s an ultra-wide camera with a 120-degree field of view. With the broader lens, Apple has revamped the camera app so that the edges of the frame are always within view, hinting that users can take wider shots. Both lenses are 12-megapixels.
You can even use the ultra-wide camera when you’re taking video. Speaking of video, Apple has upgraded its stabilization technology to make videos appear more stable.
Further, there’s a new video feature called ‘Quick Take.’ When you’re in the camera app’s photo mode, you can now hold down the shutter button to start shooting video. Another new feature called ‘Deep Fusion’ uses both cameras at the same time to optimize an image to capture more texture and detail. Deep Fusion is set to launch this fall.
Apple is utilizing technology called semantic rendering to enable the iPhone 11’s smart HDR, which determines lighting conditions across several aspects of the image.
Since there are two cameras, users can take ‘Portrait Mode’ photos optically, similar to last year’s iPhone XS models.
To keep up with Google, Apple has added a ‘Night Mode’ to the iPhone 11. A significant amount of behind the scenes processing powers this feature to improve the iPhone 11’s low-light performance.
The front TrueDepth camera has also been upgraded to 12-megapixels. To switch to the wider-angle shooter, all you need to do is turn the phone to landscape orientation. On top of the wider lens, Apple has added a slow-motion mode to the front shooter. The company called these slow-motion videos “Slofies” during the keynote.
The Face ID aspect of the front sensor has also been improved to work at more angles and is 30 percent faster, according to Apple.
The company also boosted the phone’s battery life by another hour when compared to the iPhone XR.
Finally, a feature Apple didn’t mention on stage is a new co-processor called the U1. This chip uses ultra-wideband technology that works like a small-scale GPS, letting iPhones sense each other more precisely when they’re close together.
The iPhone 11 starts at $979 CAD for the 64GB model, $1,049 for the 128GB model and $1,189 for the 256GB option.
Pre-orders begin on September 13th, with the phone going on sale on September 20th.
Apple mentioned that anyone who buys the new iPhone 11 gets Apple TV+ free for a year, as long as the offer is redeemed within three months of purchasing a device.
Apple has unveiled its trio of new iPhones: the iPhone 11, iPhone 11 Pro and iPhone 11 Pro Max.
Specs
Availability
The iPhone 11, iPhone 11 Pro and iPhone 11 Pro Max are set to launch on September 20th. Pre-orders for the devices start on September 13th at 8:00am EST.
Pricing
The iPhone 11 Pro starts at $1,379 CAD for 64GB variant, $1,589 for the 256GB model and the 512GB variant costs $1,859.
Meanwhile, the 6.5-inch iPhone 11 Pro Max starts at $1,519 for the 64GB version. The 256GB model costs $1,729 and lastly the 512GB model is a whopping $1,999.
The 6.1-inch iPhone 11 starts at $979 for the 64GB version, the 128GB sports a $1,049 price tag and lastly the 256GB variant has a retail price of $1,189.
Carriers that will likely have the iPhone 11 and iPhone 11 Pro and iPhone 11 Pro Max include Rogers, Cityfone, Eastlink, Freedom Mobile, Koodo, BellMTS, PC Mobile, SaskTel, TbayTel, Telus, Videotron and Virgin Mobile.
We’ll update the post with Canadian carrier-specific details as soon as they are available.
Apple seems to be aiming to regain its smartphone camera crown with the just-announced iPhone 11 Pro and iPhone 11 Pro Max.
While the tech giant was once the dominant force in the smartphone camera space, it has since been eclipsed by rival devices from Samsung, Google and Huawei.
While I definitely need to spend more time with both the iPhone 11 Pro and 11 Pro Max’s new camera features, it seems that could soon change.
Camera upgrades
At the outset, the upgrade to last year’s iPhone XS and XS Max is largely expected and incremental. That said, a few of the Pro’s new features, particularly in the camera department, could be exactly what Apple needs to catch up to competitors like Samsung and Huawei.
For example, there’s now a third rear-facing, wide-angle shooter with a 120-degree field-of-view, a feature we’ve seen in devices from smartphone manufacturers like Samsung and Huawei for a few years now.
When using this feature, you’ll quickly notice that your field-of-view extends beyond the camera’s on-screen controls, indicating that the phone is capable of seeing more than what you currently can.
Focusing is snappier than with the XS series, switching between cameras remains smooth and a new slow-motion video mode with the 12-megapixel front-facing shooter has been added to the phones as well. The cameras can also combine forces to offer a total 4x optical zoom range, which will come in handy when trying to grab shots from further away and also closer up. It’s worth noting that the telephoto camera still offers only a 2x optical zoom over the regular 12-megapixel shooter.
Further, there’s finally a new ‘Night Mode’ that Apple placed significant emphasis on during its keynote. I, unfortunately, didn’t get a chance to test this feature out but on paper, it seems as capable as the Pixel’s ‘Night Sight’ feature.
In general, the iPhone Pro and Pro Max’s camera experience seems significantly improved, but further testing is still required.
Slippery no more
The look of Apple’s flagship iPhones has changed slightly as well — mostly for the better. There’s a new matte finish on the rear of the phone that looks like it will finally solve the grease and dust issues that plague pretty much every modern flagship smartphone.
As an added bonus, the new matte rear also makes the iPhone 11 Pro less slippery, which should, in theory, result in fewer accidental drops. This gives the phone a more premium feel and, while it might sound silly, is probably one of my favourite changes to the iPhone’s design in the last few years.
Then there’s the new sizable, three-lens camera bump. There’s no getting around the fact that it’s big and that it does look off-putting at first. But similar to how the iPhone X’s vertical camera bump was initially a little jarring, I think I’ll likely get used to it in time.
There’s also the argument that everyone puts their phone in a case anyway (if you don’t, you should), though for such an expensive device, the iPhone 11 Pro should look great regardless.
Processor power overkill
Other expected improvements have also been added to the 5.8-inch Pro and 6.5-inch Pro Max, including Apple’s new A13 Bionic processor. While the A12 already had a sizable lead on Qualcomm’s Snapdragon 855 in terms of raw power, Apple claims the A13 is even faster.
The new processor is focused on machine learning and battery life, with Apple claiming that the 11 Pro and Pro Max’s batteries should last four and five more hours than the iPhone XS and XS Max respectively. Though I didn’t get to put the phone through its paces, it felt about as snappy as the iPhone XS in my experience.
The iPhone 11 Pro and 11 Pro Max displays looked stunning during my brief time with the phones, but also nearly identical to last year’s screens featured in the iPhone XS and iPhone XS Max. Colours are bright and saturated without pushing the look as far as Samsung does with the display featured in its Galaxy S10 and Note 10.
It’s also worth noting that 3D Touch has finally been killed off, though very few iPhone users are going to miss the feature. Instead, the iPhone XR’s long-press functionality has made its way to the iPhone 11 Pro and 11 Pro Max.
iPhone 11 Pro vs. iPhone 11
The most significant question surrounding the Pro is whether or not its higher price tag is worth it when compared to the nearly as good iPhone 11.
My initial impression is that the Pro and Pro Max might be worth the cost depending on what you’re looking for from a smartphone, but more time with Apple’s latest iPhones is definitely necessary before passing final judgment.
It was difficult to test out many of the more interesting new iPhone Pro features Apple revealed during its Fall keynote given how crowded the demo space is, so I’m looking forward to getting my hands on both smartphones and putting them through their paces.
The iPhone 11 Pro starts at $1,379 CAD for the 64GB variant, $1,589 for the 256GB model and the 512GB variant costs $1,859. Meanwhile, the 6.5-inch iPhone 11 Pro Max starts at $1,519 for the 64GB version. The 256GB model costs $1,729 and lastly the 512GB model is a whopping $1,999.
We’ll have more on the iPhone 11 Pro and iPhone 11 Pro Max in the coming days.
Just like last year’s iPhone XR, it looks like the iPhone 11 is Apple’s smartphone for everyone.
Similar to its predecessor, the iPhone 11 includes all of the iPhone 11 Pro and Pro Max features most people will care about, but with a friendlier, colourful design.
As expected, there are also a few key shortcomings present in the iPhone 11 that the average iPhone user likely won’t even be aware of.
The same power
The iPhone 11 features the same A13 Bionic processor as this year’s Pro models. In my brief time with the smartphone I found the experience just as snappy as the Pro and Pro Max.
The next thing most will likely notice about the phone is the addition of two 12-megapixel cameras on its rear. Rather than a telephoto lens like with the Pro, the iPhone 11 instead features an ultra wide-angle shooter. This means the camera won’t be capable of optical 2x zoom like the Pro or last year’s XS series. Instead, the iPhone 11’s second shooter captures a wider frame, which will likely be more useful for the average iPhone user anyway.
The iPhone 11 is also capable of shooting slow-motion video with its selfie camera, horribly dubbed “slofies” by Apple, as well as ‘Night Mode’ low-light shots. I, unfortunately, didn’t get the chance to test out either of these features, but they’re additional examples of key functionality from the Pro and the Pro Max being featured in the iPhone 11.
Similar to the iPhone 11 Pro and the iPhone 11 Pro Max, the iPhone 11 features a sizable camera bump that matches its more expensive siblings’ design. Why Apple opted to do this is unclear — it’s a strange design decision given the iPhone 11 only features two lenses, not three.
That said, given the iPhone 11’s bump is transparent, it’s not quite as noticeable as you might expect.
I’ll need to spend more time with the iPhone 11’s camera, but at the outset, I’m impressed with what it has to offer in comparison to last year’s XS and even the new iPhone 11 Pro.
The same LCD display
The phone features the same 6.1-inch LCD display as the XR rather than an OLED screen, which will likely be a point of contention for some, particularly Android users.
Yes, $979 CAD is a lot to ask for a smartphone in 2019 that only sports 64GB of storage and features an LCD display, but after briefly looking at the iPhone 11 Pro’s screen beside the iPhone 11, the actual difference in quality was negligible. Just like last year, this is another example of a feature some smartphone enthusiasts might care about but that the average iPhone user probably wouldn’t even notice.
The iPhone 11 is also rated for one additional hour of battery life compared to the iPhone XR, which is in stark contrast to the four to five hours the iPhone 11 Pro and Pro Max get from the A13 chip — at least according to Apple’s claims.
It would have been great if the tech giant made similar improvements to the iPhone 11’s battery life.
Dropping the XR
Finally, there are two new colours this time around, ‘Purple’ and ‘Green.’ Both colours are welcome additions to the lineup, but I’m particularly fond of the faded hue of the purple variant — it just feels different and makes the iPhone 11 stand out.
It’s also worth mentioning that dropping the ‘XR’ name for just ‘iPhone 11’ holds significant weight. In a way, it means that as far as Apple is concerned, the iPhone 11 is now the de facto 2019 Apple smartphone. If you want extra features, just like with the iPad line and the iPad Pro, the new iPhone 11 Pro series is what you’re after.
For everyone else, there’s the iPhone 11.
Apple’s iPhone XR was one of my favourite smartphones of last year, so I’m not surprised its successor is just as appealing. We’ll have more on the iPhone 11 in the coming weeks, including a review of the smartphone.
The 6.1-inch iPhone 11 starts at $979 CAD for the 64GB version, the 128GB sports a $1,049 price tag and lastly the 256GB variant costs $1,189.
I'll give Workflowy a try. I agree with the frustrations with Omni Outliner. I spend more time trying to remove the formatting that seems to accumulate.
During the harshest months of Minnesota’s long, dark winters, when it takes only a few moments for your eyes to start watering and your cheeks to begin stinging, I give up my outdoor hobbies and get creative about exercising indoors. Sometimes that means hopping on a stationary bike. But more and more I find myself turning to an entirely different landscape: virtual reality.
Pulling a ski-mask-like VR headset over your eyes drops you into a virtual world where you can watch movies, play games, and, yes, exercise. Sensors track the location of your hands, body, and head while you smash opponents as Adonis Creed in Creed: Rise to Glory. Other apps let you dance, bike, do yoga, and meditate.
On sites like Reddit, praise abounds for the mental and physical benefits of exercising in virtual reality, often from people who had trouble making other exercise habits last. One of those VR enthusiasts, Robert Long of Maryland, said he used VR games to improve his health and to lose more than 100 pounds, after years of managing pain resulting from two car accidents. There are many factors that contribute to weight loss, but Long’s before and after pictures have generated discussions about health in forums that are usually dedicated to sedentary entertainment.
“Most people never stick to a workout, because it’s not fun, and you are well aware it’s a workout,” says Long. “But VR has the ability to trick the mind into thinking it's a game and not exercise.”
Marialice Kern, chair of the Department of Kinesiology at San Francisco State University, describes VR as an alternative form of exercise. SFSU’s wellness center offers VR fitness classes three times a week, alongside more traditional recreation options like intramural sports and a climbing wall.
“There are certain people who don’t like to exercise, whether that’s hiking or biking. But they love to play video games,” Kern says. “Why not get both?”
While you might remember getting sweaty hopping around a Dance Dance Revolution pad or hula hooping in Wii Fit, a VR headset’s ability to block out the real world makes it even easier to get lost in the flow of exercise disguised as a game. Virtual reality is still niche, but a growing crop of VR games with a fitness element could inspire people to pick up a headset for the first time. Here’s what to know before you get started.
Get a good headset
Whether you want to start exercising at home or you’re looking to augment an existing workout routine with aerobic activity, consider getting a virtual reality headset. As a tech reviewer, I’ve tested dozens of VR headsets and think the $400 Oculus Quest is the first one that could have mass appeal. It costs about the same as a cheaper stationary bike or treadmill.
I like the Quest for workouts because it’s powerful enough for most active games and also cordless, so you won’t trip on any cables. I recommend trying one before buying so that it doesn’t wind up with other abandoned workout gear. Oculus, which is owned by Facebook, publishes a page where you can search for local demo spots. Once you bring the headset home, don’t forget to clear furniture and other obstacles out of the way. You’re going to need at least a few feet of space in every direction.
Find the right games
Long and I share the same obsession: a game called Beat Saber, which tasks you with swinging lightsabers through a series of blocks that are flying through the air. Usually set to electronic music, the levels have the same frenetic, beat-centered activity of the classic console games Dance Dance Revolution and Guitar Hero. The first time I played, it took only a few songs of slashing and ducking before I realized I was sweating—a lot.
Kern’s lab measures how much energy people expend playing VR games, for the Virtual Reality Institute of Health and Exercise. The institute likens working out with Beat Saber to playing tennis, and it estimates that players burn 6 to 8 calories per minute. Boxing games, which involve lots of quick jabs and hops, dominate the top categories. They usually burn between 6 and 10 calories a minute.
To pick your first game, find a game description that appeals to you and then check its activity level on the VR Institute’s website. Games that let you play through (and repeat) short levels give you more control over your level and length of activity. I find that whatever game you choose, the key is to commit. It’s easy to pass Beat Saber levels by moving as little as possible. But if you commit to dramatic lightsaber swings and leaping around the room to avoid objects instead of simply leaning, you’ll see faster results (and higher scores).
Fun matters
While boxing games are one of the fastest ways to burn calories, I still turn to Beat Saber for a quick workout. Creed: Rise to Glory just doesn’t hold my attention the way slicing and dicing Beat Saber blocks does.
Matthew Farrow, a health researcher at the University of Bath in the UK, conducted one of several studies that show enjoyment and intensity of exercise increase when someone is playing a game in VR. The game used in Farrow’s study challenged players to cycle along a road while avoiding trucks and police cars. The game also placed a “ghost” version of the player in the game that indicated their previous performance, allowing them to race against themselves. The study found that players worked 9 percent harder, without their motivation decreasing.
“People need to remember to try and make their exercise fun and not a daily chore,” Farrow says. “This is one of the reasons why using virtual-reality games to increase exercise enjoyment is so effective. Games also offer the opportunity to set, monitor, and achieve exercise goals, which helps maintain exercise motivation.”
If you’re just starting to get moving, the key is to pick an exercise that can become a routine. Worry less about how many calories you are burning per minute and more about what you enjoy enough to keep doing.
Add accessories
If you’re serious about getting a full workout in VR, a few accessories can help. Aaron Garcia, an American College of Sports Medicine–certified trainer and coach based in the Los Angeles area, says a fitness tracker (Wirecutter has a few recommendations) can help ensure that you’re increasing your heart rate enough to get an effective workout. It will also give you a more reliable calorie count; Kern noted that her lab sometimes finds the built-in calorie trackers in VR games to be inaccurate.
Additionally, a weight vest or ankle weights can make VR workouts more challenging and introduce a weight-training element. Garcia recommends treating VR as just one element in a well-rounded workout regimen. Pilates, yoga, and weight training are all complementary to VR. Many of his clients join him for weight-lifting training several times a week, and then they exercise on their own at home with tools like VR.
I recommend one more accessory: disposable masks. The VR headset I use has a foam face pad, and it quickly becomes soaked with sweat. It’s gross, especially considering how many friends and co-workers borrow my headset each month. A hygienic mask cuts down on the ick factor dramatically.
Garcia emphasized that no matter how you choose to exercise, you should listen to your body. If you’re sore, stop. Don’t push through an activity to the point of injury just because you’re having fun.
“I think VR is going to be awesome,” Garcia says. “For people who are sedentary, just to start doing something is so awesome. It just depends on how far you want to go with it.”
Some people are awesome at what they do. Other people are just as excellent, but they can write a book about it. Let’s take a close look at what makes the difference. To start, I want you to think about the thing you know that nobody else seems to know, the thing you are passionate about…
Herbalife is a multilevel marketing company that sells nutritional and weight loss supplements in 94 countries. After many complaints, the Federal Trade Commission (FTC) investigated the company. In 2016, it described Herbalife as a scam disguised as healthy living; the company was fined $200 million and ordered to restructure its business and issue refunds to 350,000 Herbalife distributors. Herbalife has also been sued in Belgium and Israel. There have been over 50 reports of liver toxicity attributed to Herbalife products in Spain, Israel, Latin America, Switzerland, Iceland, and the US. India is fast becoming the largest growing market for Herbalife products. A recent article, “Slimming to the Death,” is the first case report of fatal acute liver failure from the Asia-Pacific region. The researchers also found contamination of Herbalife products with heavy metals, toxins, psychotropic substances, and pathogenic bacteria.
Supplements are used by one-third to one-half of adults in the US. They are big business, with sales rising from $9.6 billion in 1994 to $36.7 billion in 2014. 20% of hepatotoxicity cases in the US are attributed to herbal and dietary supplements, the biggest culprit being multi-ingredient nutritional supplements. Identification of the responsible component in multi-ingredient supplements is a challenge due to lack of documentation about source, dosage, and purity; and some products are contaminated or mislabeled.
The LiverTox database on the NIH website categorizes Herbalife products as “a well-established cause of clinically apparent liver injury”. Not everyone would agree with the assessment of “well-established”. A study of 53 cases of suspected Herbalife toxicity evaluated 8 cases where the assumption of toxicity was reportedly confirmed by unintentional re-exposure. They found quality problems, missing data, confounding variables, and uncertainties. They concluded causality was probable in 1 of those 8 cases, unlikely in 4, and excluded in 3.
The Asian patient in question was a 24-year-old Indian woman with hypothyroidism (treated with thyroxine) and a BMI of 32.1 but no other health problems. She had been taking three Herbalife slimming products for 2 months when she developed loss of appetite, jaundice, and transient pruritus. Her liver enzymes had skyrocketed. Her condition worsened and she died while on the waiting list for liver transplant surgery. The researchers were unable to retrieve the actual Herbalife products she had taken but were able to source one product from the same seller from whom she had purchased her products; it was a nutritional club functioning without a license, and it was eventually shut down by the government of Kerala. They obtained eight similar Herbalife products from the internet and had them analyzed. High levels of multiple heavy metals were found in all of the products. There were traces of a psychotropic recreational agent in 75%, bacterial DNA in 63% (including highly pathogenic species), and other potential liver-toxic agents. The products and contaminants are listed in Table 1 in the article. They pointed out that this is a growing public health concern. They recommended better regulation, with preclinical and clinical scientific studies and post-marketing vigilance.
This was a case report, not definitive proof of causation, but along with other reports of liver toxicity from many countries, it adds to the circumstantial evidence and is clearly a cause for concern. It highlights the possible dangers of dietary supplements and the lack of purity of products on the market. Until dietary supplements are better studied and better regulated, taking products like the ones sold by Herbalife is a gamble and cannot be recommended. There is no credible scientific evidence that they improve health, so the plausible risk must be balanced against zero evidence of benefit. In the absence of proven benefit, even a suspicion of possible risk is unacceptable.
Mobilizon is an effort to create a decentralized events and groups tool, designed to free these functionalities from Facebook and enable people to manage their own events without being monetized by the Facebook machine. In the September newsletter they report that the GoFundMe fundraising has been successful, and they released wireframes of the proposed product, which is slated for testing this fall. Via Doug Belshaw on Mastodon.
This post points to some of the contradictions in current attitudes to data and the internet. We want open data, especially when open data creates a social good. But we also want personal privacy, up to and including the GDPR's "right to be forgotten". We can't have both. The presumption is generally to err on the side of personal privacy (at least in North America and Europe). But maybe that's wrong? What if "in order to realise the full potential of open (government) data, we probably need to be more relaxed in sharing personal data as well?" As the post argues, "a huge amount of our personal data is not directly created or held by me, as it is data about behavioural patterns." If I see you on the street and write "I saw John on the street," is that your data? Some good questions.
The majority of questions require more context than provided to answer well (indeed, true experts will always ask for more context before trying to provide an answer).
Let’s take a typical question, “Which drill bit should I use?”
You can’t really answer this question without more context.
What sized hole do you need? What are you trying to build? What is your budget, risk tolerance, and current level of skill with drilling holes?
The more context a member provides, the better answers they will receive.
Problems begin when well-intentioned members try to provide answers without much context.
Since few respondents hedge their answers (i.e. “if you’re trying to do [x], use this, but if you’re trying to do [y], use this…”), the majority of these answers will only be applicable in specific contexts.
This means you need to focus on getting the context in the question rather than hoping for it in the answers.
Summarize the problem.
What are you trying to achieve?
What have you tried already?
What tools and technology do you use?
And so on.
These are all really useful nudges to ensure members are providing the context they need to get the answers they want.
You might not be able to use the same approach as they do, but you can provide the right nudges at the moment members are writing the question or, failing that, when they join the community.
You’re probably not going to get much context in the answers, so work hard to get the context in the question.
We worry about face recognition just as we worried about databases: we worry what happens if they contain bad data, and we worry what bad people might do with them.
It’s easy to point at China, but there are large grey areas where we don't yet have a clear consensus on what ‘bad’ would actually mean, and on how far we worry because this is genuinely different rather than just because it’s new and unfamiliar.
Like much of machine learning, face recognition is quickly becoming a commodity tech that many people can and will use to build all sorts of things. ‘AI Ethics’ boards can go a certain way but can’t be a complete solution, and regulation (which will take many forms) will go further. But Chinese companies have their own ethics boards and are already exporting their products.
Way back in the 1970s and early 1980s, the tech industry created a transformative new technology that gave governments and corporations an unprecedented ability to track, analyse and understand all of us. Relational databases meant that for the first time things that had always been theoretically possible on a small scale became practically possible on a massive scale. People worried about this, a lot, and wrote books about it, a lot.
Specifically, we worried about two kinds of problem:
We worried that these databases would contain bad data or bad assumptions, and in particular that they might inadvertently and unconsciously encode the existing prejudices and biases of our societies and fix them into machinery. We worried people would screw up.
And, we worried about people deliberately building and using these systems to do bad things.
That is, we worried what would happen if these systems didn’t work and we worried what would happen if they did work.
We’re now having much the same conversation about AI in general (or more properly machine learning) and especially about face recognition, which has only become practical because of machine learning. And, we’re worrying about the same things - we worry what happens if it doesn’t work and we worry what happens if it does work. We’re also, I think, trying to work out how much of this is a new problem, and how much of it we’re worried about, and why we’re worried.
First, ‘when people screw up’.
When good people use bad data
People make mistakes with databases. We’ve probably all heard some variant of the old joke that the tax office has misspelled your name and it’s easier to change your name than to get the mistake fixed. There’s also the not-at-all-a-joke problem that you have the same name as a wanted criminal and the police keep stopping you, or indeed that you have the same name as a suspected terrorist and find yourself on a no-fly list or worse. Meanwhile, this spring a security researcher claimed that he’d registered ‘NULL’ as his custom licence plate and now gets hundreds of random misdirected parking tickets.
These kinds of stories capture three distinct issues:
The system might have bad data (the name is misspelled)…
Or have a bug or bad assumption in how it processes data (it can’t handle ‘Null’ as a name, or ‘Scunthorpe’ triggers an obscenity filter)
And, the system is being used by people who don’t have the training, processes, institutional structure or individual empowerment to recognise such a mistake and react appropriately.
Of course, all bureaucratic processes are subject to this set of problems, going back a few thousand years before anyone made the first punch card. Databases gave us a new way to express it on a different scale, and so now does machine learning. But ML brings different kinds of ways to screw up, and these are inherent in how it works.
So: imagine you want a software system that can recognise photos of cats. The old way to do this would be to build logical steps - you’d make something that could detect edges, something that could detect pointed ears, an eye detector, a leg counter and so on… and you’d end up with several hundred steps all bolted together and it would never quite work. Really, this was like trying to make a mechanical horse - perfectly possible in theory, but in practice the complexity was too great. There’s a whole class of computer science problems like this - things that are easy for us to do but hard or impossible for us to explain how we do. Machine learning changes these from logic problems to statistics problems. Instead of writing down how you recognise a photo of X, you take a hundred thousand examples of X and a hundred thousand examples of not-X and use a statistical engine to generate (‘train’) a model that can tell the difference to a given degree of certainty. Then you give it a photo and it tells you whether it matched X or not-X and by what degree. Instead of telling the computer the rules, the computer works out the rules from the data and the answers (‘this is X, that is not-X’) that you give it.
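The "work out the rules from the data" idea can be sketched in miniature. What follows is a toy nearest-centroid classifier in plain Python, not any real vision pipeline: the two-number "features" and the labels are entirely invented for illustration, and a real system would learn from pixels, not hand-picked summaries.

```python
# Toy sketch of learning "X vs not-X" from labelled examples instead of
# hand-coded rules. Features here are made-up 2-number image summaries.

def train(examples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: [v / counts[label] for v in vec] for label, vec in sums.items()}

def classify(model, features):
    """Return (best_label, distance): the closest centroid wins."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(((label, dist(centroid, features)) for label, centroid in model.items()),
               key=lambda pair: pair[1])

# Labelled training examples ("X" = cat-like, "not-X" = everything else).
training = [([0.9, 0.8], "X"), ([0.8, 0.9], "X"),
            ([0.1, 0.2], "not-X"), ([0.2, 0.1], "not-X")]
model = train(training)

label, distance = classify(model, [0.85, 0.75])
print(label)  # whichever centroid is nearer to the new photo's features
```

Note that the rule the system learned is purely statistical distance from its examples; nothing in it "knows" what a cat is, which is exactly the ruler-recogniser failure mode described below.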
This works fantastically well for a whole class of problem, including face recognition, but it introduces two areas for error.
First, what exactly is in the training data - in your examples of X and Not-X? Are you sure? What ELSE is in those example sets?
My favourite example of what can go wrong here comes from a project for recognising cancer in photos of skin. The obvious problem is that you might not have an appropriate distribution of samples of skin in different tones. But another problem that can arise is that dermatologists tend to put rulers in the photo of cancer, for scale - so if all the examples of ‘cancer’ have a ruler and all the examples of ‘not-cancer’ do not, that might be a lot more statistically prominent than those small blemishes. You inadvertently built a ruler-recogniser instead of a cancer-recogniser.
The structural thing to understand here is that the system has no understanding of what it’s looking at - it has no concept of skin or cancer or colour or gender or people or even images. It doesn’t know what these things are any more than a washing machine knows what clothes are. It’s just doing a statistical comparison of data sets. So, again - what is your data set? How is it selected? What might be in it that you don’t notice - even if you’re looking? How might different human groups be represented in misleading ways? And what might be in your data that has nothing to do with people and no predictive value, yet affects the result? Are all your ‘healthy’ photos taken under incandescent light and all your ‘unhealthy’ pictures taken under LED light? You might not be able to tell, but the computer will be using that as a signal.
Second, a subtler point - what does ‘match’ mean? The computers and databases that we’re all familiar with generally give ‘yes/no’ answers. Is this licence plate reported stolen? Is this credit card valid? Does it have available balance? Is this flight booking confirmed? How many orders are there for this customer number? But machine learning doesn’t give yes/no answers. It gives ‘maybe’, ‘maybe not’ and ‘probably’ answers. It gives probabilities. So, if your user interface presents a ‘probably’ as a ‘yes’, this can create problems.
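The gap between a probabilistic result and a yes/no presentation is easy to sketch. In this hypothetical snippet, the same model score reads as "no match" or "MATCH" purely depending on the confidence threshold the interface applies; the score and thresholds are made-up numbers, not from any real system.

```python
# Sketch: a match "score" is a probability, not a verdict. The answer the
# user sees depends entirely on where the UI sets its threshold.

def present(match_probability, threshold):
    """Turn a probabilistic result into the yes/no answer a UI might show."""
    return "MATCH" if match_probability >= threshold else "no match"

score = 0.62  # the model says "maybe" (hypothetical value)

print(present(score, threshold=0.95))  # cautious setting: no match
print(present(score, threshold=0.50))  # loose setting: MATCH
```

The model's output never changed; only the presentation did, which is the mechanism behind the publicity stunts described next.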
You can see both of these issues coming together in a couple of recent publicity stunts: train a face recognition system on mugshots of criminals (and only criminals), and then take a photo of an honest and decent person (normally a politician) and ask if there are any matches, taking care to use a fairly low confidence level, and the system says YES! - and this politician is ‘matched’ against a bank robber.
To a computer scientist, this can look like sabotage - you deliberately use a skewed data set, deliberately set the accuracy too low for the use case and then (mis)represent a probabilistic result as YES WE HAVE A MATCH. You could have run the same exercise with photos of kittens instead of criminals, or indeed photos of cabbages - if you tell the computer ‘find the closest match for this photo of a face amongst these photos of cabbages’, it will say ‘well, this cabbage is the closest.’ You’ve set the system up to fail - like driving a car into a wall and then saying ‘Look! It crashed!’ as though you’ve proved something.
But of course, you have proved something - you’ve proved that cars can be crashed. And these kinds of exercises have value because people hear ‘artificial intelligence’ and think that it’s, well, intelligence - that it’s ‘AI’ and ‘maths’ and a computer and ‘maths can’t be biased’. The maths can’t be biased but the data can be. There’s a lot of value to demonstrating that actually, this technology can be screwed up, just as databases can be screwed up, and they will be. People will build face recognition systems in exactly this way and not understand why they won’t produce reliable results, and then sell those products to small police departments and say ‘it’s AI - it can never be wrong’.
These issues are fundamental to machine learning, and it’s important to repeat that they have nothing specifically to do with data about people. You could build a system that recognises imminent failure in gas turbines and not realise that your sample data has biased it against telemetry from Siemens sensors. Equally, machine learning is hugely powerful - it really can recognise things that computers could never recognise before, with a huge range of extremely valuable use cases. But, just as we had to understand that databases are very useful but can be ‘wrong’, we also have to understand how this works, both to try to avoid screwing up and to make sure that people understand that the computer could still be wrong. Machine learning is much better at doing certain things than people, just as a dog is much better at finding drugs than people, but we wouldn’t convict someone on a dog’s evidence. And dogs are much more intelligent than any machine learning.
When bad people use good data
So far, I’ve been talking about what happens when a face recognition system (or any machine learning system) gives inaccurate results, but an equal and opposite problem is that people can build a system that gives accurate results and then use those results for something we don’t like. The use of faces is an easy concern to focus on - your face can be seen from across the street without your even knowing, and you can’t change it.
Everyone’s example of what we don’t want is China’s use of every kind of surveillance technology, including face recognition, in its province of Xinjiang, as part of its systematic repression of the ethnic Uyghur Muslim population there. Indeed, a Chinese research paper explicitly aimed at spotting Uyghur faces recently attracted a lot of attention. But of course, these technologies are actually being deployed across the whole of China, if not always on the same scale, and they’re being used for all sorts of things, not all of which are quite so obviously worrying.
You can get a useful window into this in the 600-page IPO prospectus from Megvii, released in August of this year. Megvii is one of the larger companies supplying what it calls ‘smart city IoT’ to various arms of the Chinese government; it says it has 106 Chinese cities as customers, up from 30 in 2016, with 1,500 R&D staff and $100m of revenue from this business in the first half of 2019. China has turned panopticons into a business.
Megvii doesn’t talk about spotting Uyghurs on the street. It does talk about ‘public safety’ and law enforcement. But it also mentions, for example:
Police being able to identify a lost and confused elderly person who’d forgotten their name and address
Automatically dispatching elevators in a large office building
Checking that tenants in subsidised housing do not illegally sublet their apartments
Building whitelists of people allowed to enter a kindergarten
Banks identifying customers at the cashiers’ desk.
Much like databases today, face recognition will be used for all sorts of things in many parts of societies, including many things that don’t today look like a face recognition use case. Some of these will be a problem, but not all. Which ones? How would we tell?
Today, there are some obvious frameworks that people use in thinking about this:
Is it being done by the state or by a private company?
Is it active or passive - are you knowingly using it (to register at reception, say) or is it happening as soon as you walk through the door, or even walk past the lobby in the street?
If it’s passive, is it being disclosed? If it’s active, do you have a choice?
Is it being linked to a real-world identity or just used as an anonymous ID (for example, to generate statistics on flow through a transit system)?
And, is this being done to give me some utility, or purely for someone else’s benefit?
Then of course, you get to questions that are really about databases, not face recognition per se: where are you storing it, who has access, and can I demand to see it or demand you delete it?
Hence, I think most people are comfortable with a machine at Customs checking your face against the photo in the passport and the photo on file, and recording that. We might be comfortable with our bank using face recognition as well. This is explicit, it has a clear reason and it’s being done by an organisation that you recognise has a valid reason to do this. Equally, we accept that our mobile phone company knows where we are, and that our bank knows how much money we have, because that’s how they work. We probably wouldn’t accept it the other way around - my phone company doesn’t get to know my salary. Different entities have permission for different things. I trust the supermarket with my children’s lives but I wouldn’t trust it with a streaming music service.
At the other end of the spectrum, imagine a property developer that uses face recognition to tag and track everyone walking down a shopping street, which shops they go into, what products they look at, pick up and try on, and then links that to the points of sale and the credit card. I think most people would be pretty uncomfortable with this - it’s passive, it’s being done by a private company, you might not even know it was happening, and it’s not for your benefit. It’s a non-consensual intrusion into your privacy. They don’t have permission.
But on the other hand, would this tracking be OK if it was anonymous - if it was never explicitly linked to the credit card and to a human name, and only used to analyse footfall? What if it used clothes and gait to track people around a mall, instead of faces? What if a public transit authority uses anonymised faces to get metrics around typical journeys through the system? And why exactly is this different to what retailers already do with credit cards (which can be linked to your identity at purchase) and transit authorities do with tickets and smart cards (which often are)? Perhaps it’s not quite so clear what we approve of.
Retailers tracking their customers does make lots of people unhappy on principle, even without any faces being involved (and even if they’ve been doing it for decades), but how about a very obvious government, public safety use case - recognising wanted criminals?
We’re all (I think) comfortable with the idea of mugshots and ‘Wanted’ posters. We understand that the police put them up in their office, and maybe have some on the dashboard of their patrol car. In parallel, we have a pretty wide deployment today of licence plate recognition cameras for law enforcement (or just tolls). But what if a police patrol car has a bank of cameras that scan every face within a hundred yards against a national database of outstanding warrants? What if the Amber Alert system tells every autonomous car in the city to scan both passing cars and passing faces for the target? (Presume in all of these cases that we really are looking for actual criminals and not protesters, or Uyghurs.) I’m not sure how much consensus there would be about this. Do people trust the police?
You could argue, say, that it would not be OK for the police to scan ‘all’ of the faces all of the time (imagine if the NYPD scanned every face entering every subway station in New York), but that it would be OK to scan historic footage for one particular face. That sounds different, but why? What’s the repeatable chain of logic? This reminds me of the US courts’ decision that limits how the police can put a GPS tracker on a suspect’s car - they have to follow them around manually, the old-fashioned way. Is it that we don’t want it, or that we don’t want it to be too easy, or too automated? At the extreme, the US firearms agency is banned from storing gun records in a searchable database - everything has to be analogue, and searched by hand. There’s something about the automation itself that we don’t always like - when something that has always been theoretically possible on a small scale becomes practically possible on a massive scale.
Part of the experience of databases, though, was that some things create discomfort only because they’re new and unfamiliar, and face recognition is the same. Part of the ambivalence, for any given use case, is the novelty, and that may settle and resettle:
This might be a genuinely new and bad thing that we don’t like at all
Or, it may be new and we decide we don’t care
We may decide that it’s just a new expression of an old thing we don’t worry about
It may be that this was indeed being done before, but somehow doing it with your face makes it different, or just makes us more aware that it’s being done at all.
All of this discussion, really, is about perception, culture and politics, not technology, and while we can mostly agree on the extremes there’s a very large grey area in the middle where reasonable people will disagree. This will probably also be different in different places - a good illustration of this is in attitudes to compulsory national identity cards. The UK doesn’t have one and has always refused to have one, with most people seeing the very idea as a fundamental breach of civil liberties. France, the land of ‘liberté’, has them and doesn’t worry about it (but the French census does not collect ethnicity because the Nazis used this to round up Jews during the occupation). The US doesn’t have them in theory, but is arguably introducing them by the back door. And Germany has them, despite very strong objections to other intrusions by the state for obvious historical reasons.
There’s not necessarily any right answer here and no way to get to one through any analytic process - this is a social, cultural and political question, with all sorts of unpredictable outcomes. The US bans a gun database, and yet the US also has a company called ‘PatronScan’ that scans your driving licence against a private blacklist of 38,000 people (another database) shared across over 600 bars and nightclubs. Meanwhile, many state DMVs sell personal information to private companies. Imagine sitting down in 1980 and building a decision matrix to predict that.
Ethics and regulation
The tech industry’s most visible initial response to these issues has been to create ethics boards of various kinds at individual companies, and to create codes of conduct across the industry for individual engineers, researchers and companies to sign up to. The idea behind both is:
To promise not to create things with ‘bad data’ (in the broadest sense)
To promise not to build ‘bad things’, and in the case of ethics boards to have a process for deciding what counts as a bad thing.
This is necessary, but I think insufficient.
First, promising that you won’t build a product that produces inaccurate results seems to me rather like writing a promise to yourself not to screw up. No-one plans to screw up. You can make lists of specific kinds of screw-ups that you will try to avoid, and you’ll make progress in some of these, but that won’t stop it happening. You also won’t stop other people doing it.
Going back to databases, a friend of mine, Steve Cheney, recently blogged about being stopped by the police and handcuffed because Hertz had mistakenly reported his hired car as stolen. This wasn’t a machine learning screw-up - it was a screw-up of 40-year-old technology. We’ve been talking about how you can make mistakes in databases for longer than most database engineers have been alive, and yet it still happens. The important thing here was that the police officer who pulled Steve over understood the concept of a database being wrong and had the common sense - and empowerment - to check.
That comes back to the face recognition stunts I mentioned earlier - you can promise not to make mistakes, but it’s probably more valuable to publicise the idea that mistakes will happen - that you can’t just presume the computer must be right. We should publicise this to the engineers at a third-tier outsourcer that’s bodging together a ‘spot shoplifter faces’ system, but we should also publicise it to that police officer, and to the lawyer and judge. After all, those mistakes will carry on happening, in every computer system, for as long as humans are allowed to touch them.
Second, it’s all well and good for people at any given company to decide that a particular use of face recognition (or any kind of machine learning project) is evil, and that they won’t build it, but ‘evil’ is often a matter of opinion, and as I’ve discussed above there are many cases where reasonable people will disagree about whether we want something to exist or not. Megvii has an ethics board too, and that ethics board has signed off on ‘smart city IoT’.
Moreover, as Megvii and many other examples show, this technology is increasingly a commodity. The cutting edge work is still limited to a relatively small number of companies and institutions, but ‘face recognition’ is now freely available to any software company to build with. You yourself can decide that you don’t want to build X or Y, but that really has no bearing on whether it will get built. So, is the objective to prevent your company from creating this, or to prevent this thing from being created and used at all?
This of course takes us to the other strand of reaction, which is the push for binding regulation of face recognition, at every level of government from individual cities to the EU. These entities do of course have the power of compulsion - they still can’t stop you from screwing up, but they can mandate auditing processes to catch mistakes and remedies or penalties if they happen, and (for example) require the right to see or delete ‘your’ data, and they can also ban or control particular use cases.
The challenge here, I think, is to work out the right level of abstraction. When Bernie Madoff’s Ponzi scheme imploded, we didn’t say that Excel needed tighter regulation or that his landlord should have spotted what he was doing - the right layer to intervene was within financial services. Equally, we regulate financial services, but mortgages, credit cards, the stock market and retail banks’ capital requirements are all handled very separately. A law that tries to regulate using your face to unlock your phone, or turn yourself into a kitten, and also a system for spotting loyalty-card holders in a supermarket, and to determine where the police can use cameras and how they can store data, is unlikely to be very effective.
Finally, there is a tendency to frame this conversation in terms of the Chinese government and the US constitution. But no-one outside the USA knows or cares what the US constitution says, and Megvii already provides its ‘smart city IoT’ products to customers in 15 countries outside China. The really interesting question here, which I think goes far beyond face recognition to many other parts of the internet, is the degree to which on one hand what one might call the ‘EU Model’ of privacy and regulation of tech spreads, and indeed (as with GDPR) is imposed on US companies, and on the other the degree to which the Chinese model spreads to places that find it more appealing than either the EU or the US models.
We were out at the airport yesterday evening. A niece was on her way through, but had a couple of hours layover there while she changed planes. We arranged to meet her in the terminal for dinner.
This gave me a bit more time to look at the art installed at the entrance to the domestic arrivals area, in the basement. To find out more about these pieces, go to the YVR webpage.
The pictures below have all been posted on flickr and can be found there just by clicking on the image.
As part of the Techfestival last week, the Copenhagen 150, which this time included me, came together to write a pledge for individual technologists and makers to commit their professional lives to. A bit like a Hippocratic oath, but for creators of all tech. Following up on the Copenhagen Letter, which was a signal, a letter of intent, and the Copenhagen Catalog, which provided ‘white patterns’ for tech makers, this year’s Tech Pledge makes it even more personal.
to only help create things I would want my loved ones to use.
to pause to consider all consequences of my work, intended as well as unintended.
to invite and act on criticism, even when it is painful.
to ask for help when I am uncertain if my work serves my community.
to always put humans before business, and to stand up against pressure to do otherwise, even at my own risk.
to never tolerate design for addiction, deception or control.
to help others understand and discuss the power and challenges of technology.
to participate in the democratic process of regulating technology, even though it is difficult.
to fight for democracy and human rights, and to improve the institutions that protect them.
to work towards a more equal, inclusive and sustainable future for us all, following the United Nations global goals.
to always take care of what I start, and to fix what we break.
I signed the pledge. I hope you will too. If you have questions about what this means in practical terms, I’m happy to help you translate it to your practice. A likely first step is figuring out which questions to ask of yourself at the start of something new. In the coming days I plan to blog more from my notes on Techfestival, and those postings will say more about various aspects of this. You are also still welcome to sign the Copenhagen Letter, as well as individual elements of the Copenhagen Catalog.
Millions of plastic bottles are purchased every day around the world. What does that look like? Simon Scarr and Marco Hernandez for Reuters virtually piled the estimated number of bottles purchased in an hour, day, month, and up to the past 10 years. They used the Eiffel Tower for scale. The above is just one day’s worth.
Rayna@QuenchWines
@DuaneStorey Again, talking in circles here. The word is related to a very specific behaviour, and if it didn’t need to exist, it wouldn’t.