Shared posts

23 Feb 04:09

Spend 10% – 20% Of Your Resources On Monitoring And Evaluation

by Richard Millington

A good rule of thumb is to allocate 10% to 20% of your budget to measuring, evaluating, and communicating the impact of your community.

If your total budget (platform, staff, etc…) is $500k a year, allocating $50k a year to build a world-class system for analysing the community’s results and communicating the next steps isn’t just common sense…it’s a bargain!

That might break down to:

  • $10k – Software.
  • $30k – Consultancy to set up measurement, analysis, and reporting systems.
  • $10k – Misc budget. Designing reports, bringing stakeholders together in-person to report the results.
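As a quick sanity check, the rule of thumb and the sample breakdown above can be sketched in a few lines (the figures are just the post's illustrative numbers, and the function name is invented):

```python
def monitoring_allocation(total_budget, share=0.10):
    """The 10%-20% rule of thumb: the slice of the total community
    budget reserved for measurement, evaluation, and communication."""
    return total_budget * share

total = 500_000                            # $500k/year total budget
allocation = monitoring_allocation(total)  # 10% => $50k/year

# The post's sample breakdown of that $50k:
breakdown = {"software": 10_000, "consultancy": 30_000, "misc": 10_000}
assert sum(breakdown.values()) == allocation == 50_000
```

At the 20% end of the range the same budget would set aside $100k a year.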

Spending 1 day a week on analysis and reporting might, in turn, break down to:

  • Maintaining the systems for collecting and updating community data.
  • Collecting data and updating community reports.
  • Developing recommendations based on the data collected.
  • Communicating the results to the steering group (both individually to key stakeholders and as a collective).
  • Updating the strategy based upon the results.
  • Designing and publishing community stories.

I bet if you did just half of these, you would have far more support for your community and knowledge about your community than you do today.

23 Feb 04:07

Zersetzung Brexit

by Chris Grey

During the Cold War, the Stasi perfected techniques of psychological warfare known as Zersetzung, sometimes translated as ‘disintegration’. Targeted at individuals and dissident groups, it involved “a systematic degradation of reputation, image, and prestige on the basis of true, verifiable and discrediting information together with untrue, credible, irrefutable, and thus also discrediting information; a systematic engineering of social and professional failures to undermine the self-confidence of individuals; ... engendering of doubts regarding future prospects; engendering of mistrust and mutual suspicion within groups …”.

I’m extremely wary of invoking comparisons between Brexit and totalitarianism, because they almost invariably exaggerate what is happening with Brexit, whilst insultingly and irresponsibly downplaying the horrors of totalitarianism. Even so, it’s not entirely fanciful to draw at least metaphorical parallels between Zersetzung and the gaslighting which characterises the government’s approach to Brexit. In particular, there is a comparison in the way that it is becoming almost impossible to separate out what is true from what is false, what is intended from what is accidental, what is incompetent from what is malevolent.

The Irish Sea border

Take the remarks made last Friday by the new Northern Ireland Secretary, Brandon Lewis, saying that there will be no Irish Sea border, despite the fact that this is precisely what the government signed up to in the Withdrawal Agreement.

There are multiple ways of interpreting what Lewis said. Perhaps he is simply ignorant of the facts. That isn’t altogether unbelievable. Yet he claimed to be saying what the government’s policy is, and, indeed, he is saying exactly what Boris Johnson has said. Perhaps he, and Johnson, are lying. That too, doesn’t exactly strain credulity. Yet, if so, to what end? If they are taken at their word, then how can preparations be made for the arrangements which need to be in place in just ten months’ time? As Jess Sergeant of the Institute for Government writes, “until the prime minister acknowledges the extent – or even the existence – of new checks, this work cannot begin in earnest”.

Or perhaps Lewis was engaging in the kind of linguistic sleight of hand referred to in my previous post, and was glossing over the truth that there will be a border for goods by reference to the fact that there will be no border for people, which is also true? Perhaps he was trying to assuage unionist sentiment in Northern Ireland? Perhaps he was trying to pander to Brexiters in his own party and the country? Perhaps he actually means that the government are going to renege on the Withdrawal Agreement?

No one really knows, and that matters not least because of its impact on negotiations with the EU. There, there is growing alarm not so much because of Brandon Lewis’s comments but because of Boris Johnson’s, for they betoken sharply divergent understandings of what the Northern Ireland Protocol in the Withdrawal Agreement means. That in turn calls into question the possibility of achieving a deal on future trade and other terms, and at the very least erodes trust in those negotiations, making it more likely that the EU will want watertight guarantees on everything. It also, of course, has profound potential consequences for the people of Northern Ireland.

Again, there’s no way of knowing what Johnson is really up to. Perhaps he wants to collapse the talks and never had any intention of honouring the Northern Ireland Protocol. For what it is worth I think the truth is more prosaic. It seems more likely to me that Johnson, with his usual arrogance, ambition, and sloppiness simply had no real idea what he was agreeing to and didn’t care. His MPs, including the ERG, and indeed some Labour MPs, just voted it through (I am referring to the pre-election vote on Johnson’s revised Withdrawal Agreement) without paying much attention to what it meant and very possibly without even reading it, and then, when it came to the election, he proclaimed that he had an ‘oven ready deal’.

The Level Playing Field

Exactly the same thing seems to have happened with the Level Playing Field (LPF) conditions. Having agreed to these at least as the intended direction of travel in the Political Declaration (PD, paragraph 77 on p.14-15 of the link), there is now a concerted attempt to disown them. This began earlier this year and was most recently and forcefully articulated by David Frost in a speech in Brussels last Monday.

Frost, the government’s lead negotiator for the talks with the EU, argued that the UK wants a Canada-style trade deal, and bemoaned the fact that the EU had supposedly previously offered this but was now “experiencing some doubts” (implying the requirement of substantial LPF conditions). These conditions, he argued, would mean that Britain was not an independent country and that to comply with them would negate the very purpose of Brexit and threaten a crisis of democracy.

Again, there are multiple possible interpretations of this. It could, despite his denials that this was so, be some kind of negotiating ploy (if so, it just makes the UK look untrustworthy). It could be a message to the Brexiters that Frost is ‘one of them’, and notably he spent much time burnishing his Eurosceptic credentials, thus hoping to avoid the fate of his predecessor Olly Robbins (good luck with that, as the Ultras will turn on him if he does any kind of deal).

It could have been an attempt to get the EU to understand the constraints of UK politics (in which case, think again – the days when the EU was willing to bend over backwards to accommodate those are long gone). It could mean that the government is now determined to leave without a trade deal, or one of the most minimal sort. It could be that the government didn’t understand what it had signed up to. Or it could mean that the government honestly believes that a good deal can be done without agreeing to LPF.

The substance of Frost’s argument was nugatory. It rests on a wholly naïve notion of what ‘independence’ means, namely freedom from any form of regulation that does not derive solely from UK law. But all sorts of international agreements, including trade agreements, involve some form of dilution of independence in this very crude sense. Many sorts of regulation, including ‘WTO rules’, involve adherence to decisions made on a transnational basis. Individual countries can influence them, but they can’t fully control them. For that matter, the EU itself often adopts rules set by other bodies, for example as regards automotive standards, and the UK is very likely to do the same.

With Brexit, Britain has chosen to lose all influence as regards EU rules. But it can certainly exert ‘independence’ in the sense Frost means simply by not agreeing substantially to the EU’s terms for a future deal – if it’s willing to accept the consequences, economically and politically. Independence, for countries as for individuals, is not just about the freedom to make your own choices, but also taking responsibility for what those choices mean. To be fair to Frost, he seemed to accept that, although only by dismissing (without evidence) almost all the economic forecasts, including those of the government itself, so as to conclude that these consequences will be largely benign.

The war of the slides

Frost’s position was echoed later in the week by the bizarre ‘war of the slides’ (£) which began when the Prime Minister’s press office released a rather whiny tweet showing the Barnier staircase with its indication that a Canada deal was an option for the UK, consistent with the latter’s red lines. Apart from being slightly embarrassing (as if, rather than roaring, the British Lion was grizzling because he’d been promised a trip to the circus), everything about this showed what Peter Foster, Europe Editor of the Daily Telegraph, called “brazen disingenuousness”.

Why? (This list overlaps with but isn’t the same as Foster’s reasons). Because from the outset the EU position has been that LPF provisions would be necessary by virtue of the size, proximity and interconnectedness of the UK and EU economies (a point underlined by its own contribution to the war of the slides). Because, again, this was what Johnson agreed to in the Political Declaration. Because what has changed is that the UK has now introduced a new red line, in addition to those which Theresa May had set and which were incorporated into the staircase. And because Brexiters have for years been saying that they wanted a Canada +++ or Super-Canada deal, and in that sense had always wanted more than to be treated ‘just like Canada’, but either did not understand or concealed the implications of that.

Once again, multiple interpretations are possible. Does the government think that this stance will make the EU suddenly drop LPF? If so, that is not just an absurd hope but also has had the opposite effect by making the EU even more suspicious of Johnson’s mendacity. Is it that the government genuinely still fails to understand what Brexit means, and what it has signed up to in the Political Declaration? Is it a domestic signal to Brexiters, in preparation for quietly accepting EU demands? Or is it preparation for leaving without a deal and noisily setting the EU up to take the blame?

If it is the latter (what I have been calling no deal 2.0, because it is different to no Withdrawal Agreement and, also, refers to more than trade but, alas, it hasn’t caught on) then the real democratic crisis is this. It would not be remotely what was promised during the Referendum (nor, for that matter, would a bare-bones or even Canada-style deal). And it would certainly not be what was promised at the general election. For that was fought on the basis of what Johnson had agreed with the EU, including the Northern Ireland Protocol in the Withdrawal Agreement and including LPF in the Political Declaration.

It’s true that the Political Declaration is not legally binding with respect to the EU, but it is part of the basis on which Johnson was elected in the UK. This was part of the ‘oven ready deal’ he offered. Going back on what was agreed may also face legal hurdles in the UK, although these probably aren’t insuperable given the size of Johnson’s majority. At all events doing so would do massive damage to Britain’s international credibility, just as it was seeking to make new deals with other countries. However, it certainly can’t be assumed that no deal 2.0 is now inevitable, and the respected trade expert Dmitry Grozoubinski has outlined the space in which a deal could, in principle, occur.

What’s going on?

The truth is that no one actually knows what Johnson’s government is planning, or even if it has a plan. At every new development, whichever direction it takes, there are always some who confidently say that ‘this was the plan all along’. Some are as certain that Brexit is a well worked out neo-liberal plot as others are that the EU and remainers are part of a nefarious neo-liberal conspiracy. Some rush breathlessly from their latest ‘insider’ (aka PR) briefing to announce with X% confidence what the latest central scenario is, whilst others ‘have a friend’ who is very high up and has revealed all but of course the ‘mainstream media’ won’t report it. Some are convinced that the hardest of Brexits is inevitable, others equally sure that Brexit in name only (BRINO) is the only possible outcome.

Having now alienated probably half of the readers of this blog, I’ll see if I can do the same for the other half. I certainly have no more idea than anyone else as to what will happen. But in terms of the underlying process, my thoughts for what they’re worth are these. On the one hand, what we are witnessing is to a degree intentional. It’s now a cliché to talk about ‘the alt-right playbook’ and its connections with the kind of psychological warfare techniques with which I began this post. But it doesn’t need any great conspiracy theory or felt-penned sociograms to see how these techniques have spread or indeed to see how they have developed from more familiar approaches to media management.

The generation of constant uncertainty, the endless revisions of even very recent history, the half-truths and lies, the divisiveness and the distractions are all plain to see and they are intended to have the effect of confusing and manipulating the public. It is disturbing, destabilising, and exhausting to be exposed to it. That is partly what I meant by the comparison with Zersetzung and why several serious analysts are describing these developments as Orwellian. It is also the reason why, as I’ve often written on this blog, it is important to keep attempting to hold on to recorded facts and rationality as the only antidote to these dangerous and shameless tactics.

But, on the other hand, if the implication is that these tactics are being used to disguise the government’s ‘true agenda’ for Brexit then I am not at all convinced. It’s important not to over-estimate the competence of our leaders and their advisers, or the coherence of what they do. In particular, over the last four years what has been astounding is that almost all Brexiters have virtually no idea what they really want or how to achieve it, make constant errors about quite basic facts, and have made endless unnecessary mistakes in their attempts to deliver it. Whilst the latter are now blamed upon ‘Theresa the remainer’, it shouldn’t be forgotten that originally she was their idol, nor that she gave government roles to Brexiters like Johnson, Fox, Davis, Grayling, Patel and Truss.

Brexiters haven’t suddenly become competent

So with the Brexiters now totally in control of government, it is as easy to believe that they think that, say, by playing hardball in threatening no trade deal they will achieve a deal scarcely any different to EU membership as to believe that they are completely uninterested in a deal and have always wanted to leave without one. Or that they are divided amongst themselves on this and everything else and the outcome will depend upon which view wins out. Or that the outcome will be entirely accidental, born of complete incompetence.

That could arise if, for example, having agreed to something he did not bother to understand, Johnson now mulishly doubles down on his mistakes. At all events, Brexiters do not suddenly become competent or well-informed just because they have red boxes to open, as David Davis’s career attests. They may be inflicting a swirl of confusions, lies, half-truths and disinformation upon the country, but they are also themselves lost within that same miasmic fog.

Of course, on either account the outlook for the country is not at all good. The government seems to have simply no idea what it is doing and, especially since last week’s reshuffle, to be populated by subservient nonentities in the grip of groupthink. If so, there is every chance that it will unintentionally lead us to disaster. Alternatively, it knows exactly what it is doing, the cluelessness is a smokescreen, and disaster is actually the plan. It’s difficult to know which is the more alarming prospect.

From this point of view, the metaphorical comparison with Zersetzung is not, as might be thought, so much to say that psychological warfare is being unleashed upon individuals and groups in order to effect their disintegration. Rather, with Brexit we have a country unleashing a kind of Zersetzung upon itself.

23 Feb 04:07

This article, found via Alper, about the work o...

by Ton Zijlstra

This article, found via Alper, about the work on ethics by MIT highlights various worrisome perspectives: whether in avoiding regulation, the lobbying, its leadership, or the focus on a specific take that supports a current bigtech development but ignores the wider context. The latter, when it is about (G’s) self-driving cars, is a glaring example. Because of it I’ve been sceptical of MIT’s work on ethics for at least five years.

23 Feb 04:04

The pronoun conundrum: “we” vs. “you” in business advice

by Josh Bernoff

Business books are full of advice. They’re also full of pronouns — and pronoun-driven confusion — around when to use “we” and when to use “you.” Start with this: you should write directly to the reader. Compare the following two alternatives: Marketers must spend less time on SEO and more time on the creation of … Continued

The post The pronoun conundrum: “we” vs. “you” in business advice appeared first on without bullshit.

23 Feb 04:04

Canceled flights due to coronavirus

by Nathan Yau

With an animated side-by-side map, The New York Times shows canceled flights in efforts to slow down the spread of the coronavirus. The left map represents 12,814 flights within China on January 23. The right map shows 1,662 on February 13. Keep scrolling to see changes for flights leaving China to other countries.

Tags: China, coronavirus, flights, New York Times

23 Feb 04:04

This Week in Glean: A Distributed Team Echoes Distributed Workflow

by chuttenc

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

Last Week: Extending Glean: build re-usable types for new use-cases by Alessio


I was recently struck by a realization that the position of our data org’s team members around the globe mimics the path that data flows through the Glean Ecosystem.

Glean Data takes this five-fold path (corresponding to five teams):

  1. Data is collected in a client using the Glean SDK (Glean Team)
  2. Data is transmitted to the Structured Ingestion pipeline (Data Platform)
  3. Data is stored and maintained in our infrastructure (Data Operations)
  4. Data is presented in our tools (Data Tools)
  5. Data is analyzed and reported on (Data Science)

The geographical midpoint of the Glean Team is about halfway across the North Atlantic. For Data Platform it’s on the continental US, anchored by three members in the midwestern US. Data Ops is further west still, with four members in the Pacific time zone and no Europeans. Data Tools breaks the trend by being a bit further east, with fewer west-coasters. Data Science (for Firefox) is centred farther west still, with only two members east of the Rocky Mountains.
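For the curious, a team geocentre like the ones above can be approximated by averaging each member's position as a 3D unit vector on a sphere and converting the mean back to latitude/longitude. A rough sketch (the example coordinates are made up, not actual team locations):

```python
import math

def geocentre(coords):
    """Approximate geographic midpoint: average the unit vectors of
    each (lat, lon) pair in degrees, then convert back to degrees."""
    x = y = z = 0.0
    for lat, lon in coords:
        phi, lam = math.radians(lat), math.radians(lon)
        x += math.cos(phi) * math.cos(lam)
        y += math.cos(phi) * math.sin(lam)
        z += math.sin(phi)
    n = len(coords)
    x, y, z = x / n, y / n, z / n
    lat = math.degrees(math.atan2(z, math.hypot(x, y)))
    lon = math.degrees(math.atan2(y, x))
    return lat, lon

# Hypothetical members in Toronto, Berlin, and London land the
# midpoint out over the Atlantic:
print(geocentre([(43.7, -79.4), (52.5, 13.4), (51.5, -0.1)]))
```

(This naive vector average ignores the fact that Earth is not a perfect sphere, which is plenty accurate for blog-post geography.)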

Or, graphically:

gleanEcosystemTeamCentres
(approximate) Team Geocentres

Given the rotation of the Earth, the sun rises first on the Glean Team and the data collected by the Glean SDK. Then the data and the sun move West to the Data Platform where it is ingested. Data Tools gets the data from the Platform as morning breaks over Detroit. Data Operations keeps it all running from the midwest. And finally, the West Coast Centre of Firefox Data Science Excellence greets the data from a mountaintop, to try and make sense of it all.

(( Lying orthogonal to the data organization is the secret Sixth Glean Data “Team”: Data Stewardship. They ensure all Glean Data is collected in accordance with Mozilla’s Privacy Promise. The sun never sets on the Stewardship’s global coverage, and it’s a volunteer effort supplied from eight teams (and growing!), so I’ve omitted them from this narrative. ))

Bad metaphors about sunlight aside, I wonder whether this is random or whether this is some sort of emergent behaviour.

Conway’s Law suggests that our system architecture will tend to reflect our orgchart (well, the law is a bit more explicit about “communication structure” independent of organizational structure, but in the data org’s case they’re pretty close). Maybe this is a specific example of that: data architecture as a reflection of orgchart geography.

Or perhaps five dots on a globe that are only mostly in some order is too weak of a coincidence to even bear thinking about? Nah, where’s the blog post in that thinking…

If it’s emergent, it then becomes interesting to consider the “chicken and egg” point of view: did the organization beget the system or the system beget the organization? When I joined Mozilla some of these teams didn’t exist. Some of them only kinda existed within other parts of the company. So is the situation we’re in today a formalization by us of a structure that mirrors the system we’re building, or did we build the system in this way because of the structure we already had?

mindblown.gif

:chutten

23 Feb 04:03

Australia: Right-wing Irrelevance in a Time of Trauma

by Gordon Price

Gord Price will be in Australia for the next month, Instagramming and podcasting his way across the country. Follow his coverage here and on Instagram (gordonpriceyvr), as well as on the PriceTalks podcast, where interviews are occasionally posted.

I’ve been following the news through the Sydney Morning Herald prior to the trip, and thought this was a particularly revealing item:

A conservative activist group – which bills itself as the right-wing version of GetUp – will target primary school children with a series of new resources designed to counter the “climate alarmist narrative” it says is being pushed in classrooms and the media.

Advance Australia’s national director Liz Storer said the resource packs being developed will be sent to schools, parents and grandparents, and could be used in the classroom or at home. The resources will say human-induced climate change “isn’t true” and “there’s a lot more to the story”.

It’s not so much that this initiative is new or unexpected.  The ‘counter-narrative’ strategy has been remarkably effective at seeding sufficient doubt to establish ‘both-sides-ism’ in media coverage and, importantly, delay any unequivocal action by government to address climate change.  Like the Harper Strategy described below, it doesn’t require outright denial, and hence doesn’t seem overly wingnut to those looking for the ‘moderate’ response to the issue.  Including those who decide what should be taught in schools.

Hence the response to this proposal by Advance Australia is what makes the story important:

But the New South Wales and Victorian governments have already indicated the materials in question would very likely be banned in public schools as they “would not be deemed objective”. …

The NSW Department of Education said Advance Australia’s resources would not be allowed in the state’s public schools because they would fall foul of the government’s policies and guidelines.

“This includes the Controversial Issues in Schools policy which says that schools are neutral places for rational discourse and objective study, and discussions should not advance the interest of any particular group,” a department spokesman said.

“Under the Controversial Issues in Schools policy these materials from Advance Australia would not be deemed objective and therefore not permitted to be used in NSW public schools.”

Likewise, the politicians in government feel comfortable in outright rejection:

Victoria’s Labor Education Minister James Merlino said he believed most principals in his state “will put this rubbish where it belongs – in the bin”.

“This organisation is a front for a group of ill-informed climate change deniers,” he said. “Our schools should not be used as a tool for a group like this to peddle their political agenda.”

A Labor minister of course.  But my guess is that the Liberals and even the Nationals will not run to Advance’s cause, much less say they would put their material in the schools.

And here’s why: doubt and denial can be planted and nourished when climate change is not catastrophic and unfolds slowly.  When catastrophic events do occur – fires, floods, droughts, hurricanes – and go beyond one-off extremes of weather, when the frequency of them becomes a pattern, and the pattern is consistent with prediction, denialists become irrelevant.  They have nothing to say in response to the reality of an existential threat – because that reality wasn’t supposed to happen.

The public and decision-makers then turn to those who have something to say about reality, and look to those who have a strategy of response.

That is where Australia is now, I believe. And I’ll be looking to see how it is playing out in real time with those engaged in “the reality that doesn’t go away.”

23 Feb 04:03

Decolonisation isn’t as simple as plenty of people suggest

Sunny Dhillon, Wonkhe, Feb 21, 2020

I agree with this: "Real decolonisation would involve us... to lose a number of Eurocentric and liberal privileges. It is unlikely that the contemporary neoliberal university is either suited to, or indeed interested in, such a task – it is too implicated in the legacy of colonisation to do so. To echo Andrews, what is required is using university forums to encourage community organisation and education in settings outside of the establishment." This is true not only for decolonization initiatives (which IMO would need to be reasonably well defined, because I do not want to see it resulting in one group of poor people being pitted against another) but for egalitarian initiatives in general.

23 Feb 04:02

How Hands-On Projects Can Deepen Math Learning for Teens

Kara Newhouse, Mind/Shift, Feb 21, 2020

I really like this point that's raised in the article: "Avoid group grades. Many adults and kids can recall working on a class project where some people pulled more of the weight than others. Although project-based learning often involves teamwork, the SLA math teachers said they do not assign group grades." I've been on both sides of that equation - the kid who did all the work, and the kid who did none (though to be fair to myself, I refused to accept a full share of credit when I did none of the work, hence pushing up my partner's score).

23 Feb 04:02

Parking Bad

by peter@rukavina.net (Peter Rukavina)

Parking pro-tip: if your car, truck or van is covering the sidewalk, you’re doing something wrong.

Imagine rolling down this Prince Street sidewalk on this bitterly cold day only to encounter this: your only options are to roll out into traffic, or to backtrack to Grafton Street, cross, and take the other sidewalk down.

This is a problem that’s 100% preventable by exercising thoughtful parking. Spread the word.

Cars and trucks blocking the Prince Street sidewalk.

23 Feb 04:02

Lila’s first bicycle

by tychay

I took off Tuesday from work to hang with my cousin, Peter.

After the recent death of his father, he was inspired to take up bicycling again for his health. I helped him buy a gravel bicycle. He also purchased a bicycle for his daughter, Lila.

We spent the day building and testing both bicycles and talking about a lot of stuff. Because we grew up six years apart on different coasts, this was the most time I think we’ve ever talked.

That morning Peter told Lila, “Uncle Terry’s going to come over while you’re in school and help me build your bicycle!” She asked if I’d still be there when she got back.

Well, even though I had insomnia the night before and got very little sleep, I had to stay. It was well worth it…

Lila's first bicycle!

Karen said Lila was telling everyone in her class how excited she was that she was getting a bicycle. Lila doesn’t know how to ride a bicycle just yet, but she already sticker-bombed her bike like a boss.

I wish them a long lifetime of happy trails. Hopefully, I’ll get to go on a couple rides with them.

23 Feb 04:01

Recommended on Medium: Tools for home cooking

I do write about cooking here, but this is a long post comparing programming to cooking. If you haven’t read it already, go read Robin Sloan’s “An app can be a home cooked meal”. Sloan wrote an iOS app for himself, his sister, and their parents, and considers himself a “home cook” equivalent programmer.

Ton and Roland beat me to a write up with their thoughts on the post. I knew as soon as I read the article that they’d both have thoughts on it.

Roland picked out all of the juicy quotes. I’ll plus one the first two:

In a better world, I would have built this in a day using some kind of modern, flexible HyperCard for iOS, exporting a sturdy, standalone app that did exactly what I wanted and nothing else. — Robin Sloan

Searching for a recent tweet of mine about HyperCard, I found another thread from 2018 that points at HyperCard plus IPFS plus blockchain, which is roughly where I’m pointed these days, including mobile devices as a creation and deployment platform for apps. And upthread there’s a quote from @patrickc:

End user computing is becoming less a bicycle and more a monorail for the mind. — Patrick Collison

This is a call back to Steve Jobs’ computer as bicycle for the mind.

In our actual world, I built it in about a week, and roughly half of that time was spent wrestling with different kinds of code-signing and identity provisioning and I don’t even know what. I waved some incense and threw some stones and the gods of Xcode allowed me to pass — Robin Sloan

The developer experience of building software locally, on your desktop or laptop, is very good in 2020. Live reloading, a mobile emulator, debugging tools, your code editor checking things as you type. But as soon as you get to deployment — to launching this thing into the world without turning your screen to show someone else — you hit an incredibly difficult learning curve, immediately.

Roland’s part 2 commentary finds Eugene Wallingford’s Programming feels like home:

Sloan reminds us that programming can be — is — more than a line on a resume. It is something that everyone can do, and want to do, for a lot of different reasons. It would be great if programming “were marbled deeply into domesticity and comfort, nerdiness and curiosity, health and love” in the way that cooking is. That is what makes Computing for All really worth doing. — Eugene Wallingford

Ton writes I am the programming equivalent of a home cook.

I would never call myself a ‘real’ programmer, or a programmer at all really. Yet, I’ve been programming stuff since I was 12. — Ton Zijlstra

I have a degree in Computer Science, earned in an era where Java and then a few academic programming languages were taught. I graduated with classmates who couldn’t really program at all. And so I have the same feeling: Am I actually a ‘programmer’?

Talking to Brooke at our team retreat last week, I learned she had picked up 22 programming languages in a handful of months.

As I thought about programming languages, I wasn’t really sure which languages I could say I “know”, or can actually write new code in from scratch. This will turn into a separate blog post with a little trip down memory lane. Brooke has assured me that if I actually coded full time, I’d be a solid intermediate developer — although mainly because of my systems experience in servers and DNS and general abilities in tracing down error messages with web searches.

Ton included a lovely picture of him and me home cooking side by side in my old Vancouver apartment, way back in 2008.

Let’s loop back to Sloan’s article:

…my app doesn’t need a login system. It doesn’t need an interface to create and manage contacts. It already knows exactly who’s using it.

He then goes on to reference Clay Shirky’s Situated Software from 2004 — “Situated software, by contrast, doesn’t need to be personalized — it is personal from its inception”.

Sloan (Ton quoted this too):

And, when you free programming from the requirement to be general and professional and scalable, it becomes a different activity altogether, just as cooking at home is really nothing like cooking in a commercial kitchen.

If we expect home cooking to happen, we must build tools designed for home cooking. Home cooking from inception.

That gas-powered apartment stove Ton and I cooked on was “designed” for landlords looking to supply the narrowest, cheapest stove they could buy.

At the same time, I distinctly remember The Black Hoof in Toronto having two electric coil “apartment” stoves as their main cooking implements.

Masterpieces can be made with home tools, but an industrial oven cannot be moved into an apartment. And that’s how software is designed and deployed today.

I am supremely annoyed that Drupal or WordPress can’t easily be self-hosted by average users: the amount of experience and maintenance required to keep an entire LAMP stack running — and more importantly, secure! — means that you have to pay someone else to host it for you. I consider this to be a failure on my part: I poured a lot of energy into the Drupal code and community so that people could self-host, and self-publish, and blogging and groups and communities arose out of this. And now we’re back to needing to pay someone, probably at least $10-$20 per month, just to keep the lights on.

We’ve been writing LAMP stack applications for 25 years or so. Linux (operating system), Apache (web server), MySQL (database), and PHP/Python/Perl (application code). Some of the initials have changed (NodeJS, Ruby on Rails), but the pattern remains the same: care and feeding from operating system to web server.

I wrote an article called The New Hack Stack in 2012. In it, I advocated for Platforms-as-a-Service (PaaS). The provider takes care of maintaining the operating system and the web server, provides plugins for many different add-ons including databases, and then you’re left only needing to maintain or update application code.

My particular favourite in this category is Heroku. In my general quest to not have to maintain operating systems or web servers, I’ve spent a lot of time hacking on open source applications so they can be one-click deployed to Heroku.

I’ve had to give up on some of them: these applications are not designed in a way that makes deploying them to Heroku easy. They are designed such that step one is “log in to your Linux server, mess around with the operating system, and configure your web server”. Perhaps OK for another developer, but why design them this way, making it hard for “users” to deploy?

I didn’t know it at the time, but the design pattern of the 12 Factor App had just been authored in 2011, by the Heroku founders.
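The spirit of the 12 Factor App is easy to illustrate. Here is a minimal sketch in Python of factor III (“store config in the environment”), which is what lets the same code run unchanged on a laptop and on a PaaS like Heroku; the variable names (DATABASE_URL, PORT, DEBUG) are conventional examples, not anything the methodology mandates:

```python
import os

# Factor III of the 12 Factor App: configuration lives in environment
# variables, never in the code or in checked-in config files.

def load_config():
    return {
        # Heroku-style platforms inject DATABASE_URL when you attach a
        # database add-on; locally you export it yourself (or take a default).
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///local.db"),
        # The platform tells the app which port to bind, not the other way round.
        "port": int(os.environ.get("PORT", "8000")),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }

config = load_config()
```

Locally you export the variables yourself; on a PaaS the platform injects them, so the app never hard-codes credentials or ports and the deploy step shrinks to “push the code”.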

Not so coincidentally, many of the members of the original Heroku team are doing very interesting things at the industrial research lab @inkandswitch, including writing last year about local-first software, which goes all the way to insisting that software should work for users locally, under their control.

I can see these new shifts beginning to happen. We can build tools for home cooking. But we have to design them that way from the beginning.

blog.bmannconsulting.com/tools-for…

23 Feb 04:01

Twitter Favorites: [bmann] Reading @robinsloan's "An app can be a home-cooked meal" got me bouncing around the blogosphere of @ton_zylstra &… https://t.co/dq6hCxoF28

23 Feb 03:58

Semantic markup, browsers, and identity in the DOM

by David Baron

HTML was initially designed as a semantic markup language, with elements having semantics (meaning) describing general roles within a document. These semantic elements have been added to over time. Markup as it is used on the web is often criticized for not following the semantics, but rather being a soup of divs and spans, the most generic sorts of elements. The Web has also evolved over the last 25 years from a web of documents to a web where many of the most visited pages are really applications rather than documents. The HTML markup used on the Web is a representation of a tree structure, and the user interface of these web applications is often based on dynamic changes made through the DOM, which is what we call both the live representation of that tree structure and the API through which that representation is accessed.

Browsers exist as tools for users to browse the Web; they strike a balance between showing the content as its author intended versus adapting that content to the device it is being displayed on and the preferences or needs of the user.

Given the unreliable use of semantics on the Web, most of the ways browsers adapt content to the user rarely depend deeply on semantics, although some of them (such as reader mode) do have significant dependencies. However, browser adaptations of content or interventions that browsers make on behalf of the user very frequently depend on the persistent object identity in the DOM. That is, nodes in the DOM tree (such as sections of the page, or paragraphs) have an identity over the lifetime of the page, and many things that browsers do depend on that identity being consistent over time. For example, exposing the page to a screen reader, scroll anchoring, and I think some aspects of ad blocking all depend on the idea that there are elements in the web page that the browser understands the identity of over time.

This might seem like it's not a very interesting observation. However, I believe it's important in the context of frameworks, like React, that use a programming model (which many developers find easier) where the developer writes code to map application state to user interface rather than having to worry about constantly altering the DOM to match the current state. These frameworks have an expensive step where they have to map the generated virtual DOM into a minimal set of changes to the real DOM. It is well known that it's important for performance for this set of changes to be minimal, since making fewer changes to the DOM results in the browser doing less work to render the updated page. However, this process is also important for the site to be a true part of the Web, since this rectification is important for being something that the browser can properly adapt to the device and to the user's needs.
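The identity-preserving part of that rectification step can be sketched in a few lines. This is a toy model in Python (not real browser or React code; the names are made up for illustration) of keyed reconciliation: existing node objects are reused wherever the key matches, so anything the browser has tied to a node's identity, such as focus, scroll anchoring, or accessibility state, survives the update for reused nodes:

```python
# Toy model of keyed reconciliation: given the existing "real" nodes and a
# newly generated "virtual" list of (key, text) pairs, reuse existing node
# objects wherever the key matches, creating fresh objects only for
# genuinely new entries.

class Node:
    def __init__(self, key, text):
        self.key = key
        self.text = text

def reconcile(old_nodes, virtual):
    """virtual is a list of (key, text) pairs produced by the render step."""
    by_key = {n.key: n for n in old_nodes}
    result = []
    for key, text in virtual:
        node = by_key.get(key)
        if node is None:
            node = Node(key, text)   # genuinely new entry: fresh identity
        else:
            node.text = text         # updated in place, identity preserved
        result.append(node)
    return result

old = [Node("a", "first"), Node("b", "second")]
new = reconcile(old, [("b", "second!"), ("a", "first"), ("c", "third")])
```

Reordering and editing the list left "a" and "b" pointing at the very same objects as before; only "c" is a new node, which is exactly the minimal set of DOM changes the frameworks aim for.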

23 Feb 03:58

Changing text color and the special issue this represents for Mac users.

by Matt Harris
Thunderbird 68 removes the limit on the number of colours available and now offers whatever the operating system of the device offers, which is usually more than 16 million.


You are using Apple, so the complex things you see are part of your Apple operating system. Windows users see a colour picker to pick a colour from because that is the tool Windows offers up when asked.

Both operating systems should put up an equivalent to this

The Windows dialog you see when you click the colour bar to set another colour looks like this:

[screenshot: Windows colour picker]

Apple has something that looks like this as their operating system colour picker:

[screenshot: macOS colour picker]

They offer instructions here on using this part of their operating system.

The special problem Mac users appear to have is that they do not close the colour picker dialog, and then complain that the colours in Thunderbird do not update. So close the dialog. That is all you have to do to make it work.
23 Feb 03:54

Weeknote 08/2020

by Doug Belshaw

This week has been half-term for my kids, so I’ve been working less. Although it didn’t pan out exactly this way, the plan was to keep the same working days for Moodle and We Are Open co-op, but just work half days. My thinking was that this would allow me to keep up with the projects I’m working on, and also spend time with the family. As it happens, this approach has left me feeling like I’ve neglected both a bit.

Last week, I explained how my son had suffered from some trauma to his neck. I’m absolutely delighted to report that now, almost two weeks after the injury, he’s back to (carefully!) playing football with his sister and me. Some parts of his left hand still have reduced sensation and he can’t turn his neck all the way to the left, but his recovery this week has been pretty staggering.


On the MoodleNet side of things, I’ve been finalising details around the budget for this year. Up until now, budgets have been centralised at Moodle, so it’s great for the team to have some direct control over resourcing. As we received around 90% of what we asked for (pretty standard practice, I’d say) we’ll have to bring forward some of our plans to make MoodleNet sustainable in a way that isn’t annoying or creepy.

What MoodleNet currently looks like (staging server)

I’ve learned with this project not to make promises about exactly when things will be ready. That being said, we’re probably a few weeks away from federation testing, which I’m looking forward to.

In addition to this, we’ve been working closely with the Moodle LMS team around integration for the two platforms, ready for their 3.9 release in May. Things are going well in that regard.


For the co-op, I’ve been working on a project which will be launched soon. It’s a community for aspiring open leaders within the public sector, and has had me revisit some work I’ve done over the last decade.

A slide from one of the decks I’ve been working on this week

This is a joint venture with LDR21 and sponsored by Red Hat. I’ve been collaborating mainly with Laura on some workshop resourcing around the fundamentals of working openly. It’s always interesting revisiting the ‘why’ behind the ‘what’ of your everyday working life.


I received my new phone this week, a OnePlus 7 Pro 5G. It’s a beast in every sense of the word: larger and heavier than my previous phone, but with three cameras, insane amounts of RAM and storage space, and a 90hz full-screen display. Given where we live, the 5G isn’t much use to me right now, but I’m future-proofing…

OnePlus 7 Pro 5G

This has meant we’ve done a hermit crab-style upgrade, with my son inheriting my OnePlus 5 and passing on his OnePlus One so that my daughter now has her first phone. She’ll get a SIM for it in time for next academic year.

Inexplicably, my new phone doesn’t have wireless charging built-in. So I read this guide and added it myself with a super-thin charging receiver that fits underneath the phone case and plugs into the USB-C port. It’s actually pretty unnoticeable, protects the USB port from dust and dirt, and works really well.


Due to my son’s injury and him previously not being able to do much in the way of physical activity, we brought our PlayStation 4 downstairs and attached it to our TV. (It’s usually hooked up to a projector in the room next to the master bedroom.)

Ultimate Chicken Horse

A game that we’ve greatly enjoyed playing is Ultimate Chicken Horse. It’s as daft and fun as the name suggests, and we’ve had a whale of a time playing it together this week! It’s up there with Party Golf for fun multiplayer games.


Next week, it’s back to a regular week of working full days for Moodle on Monday, Tuesday, and Friday, and the co-op on Wednesday and Thursday. My wife’s parents are coming up to visit next weekend, but other than that it’s business as usual.


Header image: photograph of the back of my home office door showing part of Inappropriate Guidelines for Unacceptable Behaviour. The partial quotation to the right reads “Anxiety is the dizziness of freedom” (Kierkegaard).

23 Feb 03:53

amospoe: “There are roughly three New Yorks. There is, first,...



amospoe:

“There are roughly three New Yorks. There is, first, the New York of the man or woman who was born here, who takes the city for granted and accepts its size and its turbulence as natural and inevitable. Second, there is the New York of the commuter—the city that is devoured by locusts each day and spat out each night. Third, there is the New York of the person who was born somewhere else and came to New York in quest of something. Commuters give the city its tidal restlessness; natives give it solidity and continuity; but the settlers give it passion.”

—E. B. White, Here is New York

(photo: devon dikeou)

23 Feb 03:52

whispers of what was remembered

by Dave Pollard

 .                                                                                                                                                                          [to frankie]

there is already no one

there is already no seeking

there is already only this

not in time, but not eternal — only always

not in space, but not infinite — only everywhere

there is already only what is appearing, not any thing, not any happening,
.                    just unfolding

it is already over
.                           (all my trials…)

there is already no thing, and no thing apart

there is already no life, no birth, no death, no events, no continuity, nothing passing, nothing changing, nothing lost or gained

everything is already always new

there is already no meaning, no purpose, no causality, no agency, no how or why,
no need for any of that

there is already no relationship, no necessity for anything to be other than this

this is already obvious, always here, always just this, open to be seen
.                      (but not by any one)

there is already nothing needed, nothing to be done,
nothing that must or should or might be done

◊◊◊

what then is left?

no thing

and everything

a stunning, full-on embrace: this

unimaginable

wonder


[painting by BC artist Terry Kolber]

23 Feb 03:43

Once Again, My Miracle on Ice Story

by Rex Hammock

18 years ago today (February 22, 2002), I wrote a blog post about being in the arena for the “Miracle on Ice”: the USA vs. USSR Olympic match on February 22, 1980.

That first post was written on the 22nd anniversary of the match. And today (2.22.20) is the 40th anniversary of the match.

That’s a miracle to me.

Ten years ago, I updated that post slightly. To keep you from having to click again, here’s the post from 18 years ago with some slight time-related changes.


I remember exactly where Ann and I were on February 22, 1980. It’s one of the most memorable days of my life. But then, most Americans remember that day: the Miracle on Ice day, when the seventh-seeded 1980 U.S. Hockey Team (Team USA) met the mighty top-seeded legends from the USSR in that forever-famous semifinals-round match. Almost every American alive then can tell you where they were at the exact moment they learned who won (or, if they were really patient, when they watched the tape-delayed broadcast). In 1999, ESPN.com users voted it the greatest game of the century.

I know Ann and I will remember where we were. I know we’ll never forget. We were two of the 7,000 fans screaming in delirium inside the Lake Placid Olympic Arena. Ann and I were lucky to be there, to say the least. Her mother* had always wanted to go to an Olympic Games. And since her Dad did not share that dream and has a legendary aversion to flying, Ann, her siblings and I got to accompany Harriet to her first-ever games. (But not her last, as she’s in Salt Lake City today and has been to many, if not most, of the winter and summer games since 1980. In other words, she has a heck of a pin collection.)

We were part of a package tour group that met up in New York City on Wednesday, the 20th, in front of a hotel in the east 50s.

There, we joined a bus full of fellow fans, including, as we learned later, several relatives of Linda Fratianne, the American figure skater who won the silver medal 40 years ago last night. A bus took us to our motel 100 or so miles from Lake Placid.

In 1980, the Olympics were big, but not BIG by today’s standards. It was the 1984 Los Angeles summer games in which Peter Ueberroth created the modern corporate-sponsored mega-event. Lake Placid was a one-stop-light village much smaller than even Park City, Utah. I’m sure there was commercialism everywhere, but the only corporate presence I can recall is a guy handing out free cans of Copenhagen smokeless tobacco. I passed on the offer.

On Thursday morning, we took a two-hour bus ride to the mountain where the men’s slalom was being run. The course was near the top of the mountain, and unfortunately, the line waiting for the ski lift up to the course was backed up a half-mile or so. So Ann and I joined others in a hike up the mountain. Now, this was before I had taken up skiing and had I known what I know now about ski apparel and such, I would have avoided the next hour and a half of unforgettable pain and agony. Without going into detail, let me just say, if you’re from Tennessee and just wearing bluejeans and hunting boots, don’t go hiking up a mountain in three feet of snow.

We made it to the top of the mountain in time to see Phil Mahre win a silver medal. At least we were told that speck skiing down the course a few hundred yards away was Mahre. Ann and I were more thrilled with the small heated building we found in which we could regain some feeling in our hands and feet. I don’t recall how we got back down the mountain, but I’m sure it involved some more frostbite.

While the details of the rest of the day are still foggy, I believe they included seeing the medal ceremony in which Eric Heiden was awarded his fifth gold medal and, later that evening, having a comfortably warm seat at the women’s figure skating finals. Since by then we were close chums with the Fratiannes, we were disappointed when she didn’t make it to the top podium (obviously a victim of an Eastern Bloc judging conspiracy). I believe we attended a ski-jump event on the morning of the 22nd, but by then we were focused, like all Americans, on the hockey game that afternoon. I recall there were rumors that ABC had tried to get the IOC to move the hockey game to the later, night slot so that it could be shown in the U.S. during prime time. There would have been a riot in the streets of Lake Placid had that happened, as the tickets were not for a specific game but for a specific time slot. In other words, our tickets were for the late-afternoon semifinals game on Friday. It was sheer luck of the draw that our tickets turned out to be tickets to THE game.

Like everyone that year, Ann and I had grown more and more wrapped up in the games during their first week and a half. And even though I did not understand offsides or icing, I still got caught up in the Team USA mania. The often-explained context of the Olympic Games that year can’t be overstated: we Americans were gripped with self-doubt and fear. We were in the midst of an oil shortage and recession, ballooning inflation and interest rates, the Iranian hostage crisis, and the invasion of Afghanistan by the USSR.

Jimmy Carter was not helping things as he had declared that he would not leave Washington while the hostages were being held, a move that seemed naive at the time and even more ridiculous today. He seemed like a wimp and it made Americans fear we were all turning into wimps.

But then Team USA appeared. The new chant “USA, USA” started up that first week of the games. The guys who were later to be named Sports Illustrated sportsmen of the year were all unknown college kids at first, but then we started to learn who they were. Jim Craig, the goalie, and his relationship with his father became overnight mythology. Mike Eruzione, the team captain, became as recognized as Michael Jackson. Coach Herb Brooks’ quote, “Gentlemen, you don’t have enough talent to win on talent alone,” became one of the best-known statements ever made.

While we were all excited as we made our way into the arena, I recall vividly that I was not at all hopeful that Team USA would win. I thought, however, that it was just exciting to think that they may win a medal and that I was going to get to watch them play the legendary Soviet team.

Our seats were in the upper level of the arena, which was still nearly rink-side compared to today’s nose-bleed seats. Remember, the Lake Placid arena only holds about 7,000 spectators. As I recall, they did not have enclosed press boxes or any type of suites which meant that ABC’s hockey commentator Al Michaels (who also got famous those two weeks) was perched on a temporary stand built over some seats about 15 yards from where we were sitting. I could look over and see him screaming.

Ann and I screamed our heads off as Team USA stayed close to the Soviets, never taking the lead, but never being out of reach. Then, with ten minutes left in the third period, Mike Eruzione scored a goal to put us ahead 4-3. Ten minutes. Ten of the longest minutes of my life. Ten minutes in super slow motion. Screaming and jumping and screaming, I still felt it could not be possible for the U.S. to win. Surely, those communist supermen would find a way to score in the final moments.

But they didn’t.

The fans all sat there in shock. Screaming but in shock. We didn’t know what to do. We didn’t want to leave the arena, but soon the celebration poured out onto the main street of Lake Placid. I couldn’t talk for 24 hours.

The victory did not ensure the gold medal for the USA. A 4-2 victory on Sunday against Finland nailed that. We were watching that game on a television a few blocks away when Jim McKay announced that anyone who was in Lake Placid could come to the arena for the awards ceremony. We took him up on his offer and went back to the arena, where we screamed our heads off once more.

*My mother-in-law passed away recently. I haven’t attended another Olympic Games, but she attended several more, both winter and summer. And she won the gold medal for mothers-in-law.

23 Feb 03:43

Instapaper Liked: An Unsettling New Theory: There Is No Swing Voter

Magazine An Unsettling New Theory: There Is No Swing Voter Rachel Bitecofer’s radical new theory predicted the midterms spot-on. So who’s going to win 2020?…
23 Feb 03:43

Instapaper Liked: Canada is fake

I just returned home from a trip to New York, where everyone I met was very excited to meet a Canadian in real life. “What’s Canada like?” they asked, looking…
23 Feb 03:42

Instapaper Liked: How urban design affects mental health

Cities are human habitat, evolved over thousands of years to be in balance with humans. The radical experiment of the past century to reshape the places we live has…
23 Feb 03:40

Google Is Testing Double Tap Gestures on the Rear of Pixel Phones in Android 11

by Mahit Huilgol
Over the past several years we have seen different ways in which users can interact with their smartphones. We have seen squeezable frames, motion sensors, and also the ability to navigate by using fingerprint sensors. Google recently launched the first preview of Android 11, and the folks at XDA-Developers have unearthed a new gesture in the code.
21 Feb 03:25

What is Mobile PureOS?

by Sebastian Krzyszkowiak

Since I’ve seen plenty of misconceptions flying around, let’s go through a quick summary of what is included in PureOS, the default GNU/Linux distribution installed on the Librem 5.

tl;dr: it’s pretty much Debian Stable with GNOME with Purism’s phosh, phoc, libhandy, Calls, and Chats, with some amount of adaptive apps, backports, and cosmetic patches

One stack to rule them all

In the past, mobile GNU/Linux distributions diverged from their desktop equivalents quite a lot. Frameworks like freesmartphone.org offered a comprehensive set of features expected from a mobile phone; however, they didn’t even try to integrate with existing infrastructure present on desktop platforms like GNOME or KDE – and that was perfectly understandable, since the requirements of those platforms were drastically different and the main focus was on supporting one thing well. Some later efforts, such as oFono, also failed to gain traction on the desktop – some popular distributions still don’t have it packaged in their repositories.

Even the Linux kernel wasn’t really ready for mobile’s prime time yet – at least on the mainline. Some mobile-specific features lacked common APIs, so userspace had to be able to support lots of various approaches found across various Linux forks made for different devices. While FSO tried to abstract that out, this presented an additional hurdle in getting those things upstreamed into major desktop frameworks that understandably weren’t really interested in maintaining code that aims at a moving target.

Fortunately, the situation has changed over time. Now, in 2020, it turns out that plenty of features that used to be exclusive to mobile phones 10 years ago have already found their way into the desktop stack. In the age of 2-in-1 convertibles, your laptop is likely to have orientation and ambient light sensors supported by iio-sensor-proxy; you can plug a 4G dongle into its USB port and have ModemManager use it for an internet connection; it may even support connected suspend, which is very similar to how mobile phones used to do their power management. Touchscreens with high pixel density are common and well-supported. Where you had to invent your own solutions in the past, there are common and mature cross-desktop APIs now.

Since one major reason behind the choice of technology in PureOS is convergence, we try to use the same stack on the phone as we already do on desktops. Packages are installed and updated via apt and dpkg; telephony is handled by ModemManager; gnome-session and gnome-settings-daemon take care of your graphical environment, etc. We diverge only where absolutely necessary (mostly the shell and bootloader) and actively try to avoid reinventing the wheel. Thanks to this approach, improvements made for phones can also be used on desktops and vice versa. Just plug a 4G dongle into your laptop, run Chatty and send SMS to your friends – it’s as simple as that!

libhandy

Unlike some other mobile environments, PureOS hasn’t created a new UI framework to use on mobile. This allows us to reuse all the applications you already know from the desktop. But wait! Of course, this doesn’t mean that there’s no work needed on them – unfortunately, desktop apps aren’t typically written with small touchscreens in mind; and because GTK3 is already in maintenance mode with no new features being developed, it’s hard to address some toolkit shortcomings there as well. That’s where Purism’s libhandy comes in – a library with a set of GTK widgets that help GTK applications adapt to various form factors. libhandy is not a platform; it’s just a library that augments GTK with additional widgets that address common problems that pop up when adjusting to available screen space. You can – and often should – use libhandy even for applications that aren’t supposed to run on a phone, since it may make your app still work well when its window gets small, for instance on a crowded tiling window manager on an 11” laptop screen.

Eventually libhandy is supposed to go away (or at least change its purpose), since the goal is to get all these goodies upstreamed into upcoming GTK4.

The shell

Despite mobile PureOS being pretty much a desktop distribution running on a computer that just happens to be a bit smaller than usual, there are still some differences from the typical desktop GNOME stack that you may be using on your PC. There’s haegtesse (on the devkit) and wys (on the phone) that configure audio routing through PulseAudio – however, they’re pretty much simple pieces of cyber-plumbing; just go check the number of LOC there, or even read the complete source code.

There is one notable part that actually differs from the typical desktop stack quite a lot – and it’s the shell. Why is that?

GNOME Shell is not designed for 5” touchscreens, and its monolithic structure as a Mutter module doesn’t make it easy to experiment with new UI paradigms. It doesn’t even use GTK, which makes all the things provided by libhandy inapplicable there. As per upstream GNOME developers’ suggestion, we have created a new shell that serves as a playground for mobile UI implementation and as a step towards a possible future GNOME Shell replacement that would be able to work on both desktops and mobiles (Guido, one of our core developers, has been successfully using it as his main desktop shell for some time now).

However, this doesn’t mean that we’re re-implementing everything from scratch. GNOME Shell’s role became split into three different parts: phosh is the shell; phoc is the Wayland compositor; and squeekboard is the on-screen keyboard. They are all based on common interoperable protocols and components: phosh uses GTK and plenty of GNOME infrastructure (it also implements Mutter’s dbus interfaces); phoc uses wlroots, a common library behind various Wayland compositors (most notably sway); and all three of them use the layer-shell Wayland protocol, which makes it possible to run, say, phosh on sway or MATE on phoc. We have also worked on improving some parts of the ecosystem, like the virtual keyboard and text entry Wayland protocols, which are starting to gain adoption in other implementations as well.

In the end, phoc and phosh are just small parts of the whole machine ready to be replaced when the time is right, most likely with whatever succeeds GNOME Shell, perhaps it could be phosh 😉

PureOS Amber

PureOS is a Debian GNU/Linux derivative; amber is its current stable version. It’s pretty much equivalent to Debian Buster with some packages patched in order to provide a customized out-of-the-box experience on Purism laptops and to comply with Free Software Foundation guidelines. Packaging for PureOS is generally avoided in favor of packaging directly for Debian, and PureOS archives are automatically synchronized with Debian wherever possible (so you quickly get updates from buster-security into amber-security, for instance).

As mentioned above, amber derives from buster – so, it’s essentially a stable distribution that doesn’t change a lot (if at all).

Note: There’s also PureOS Byzantium, which automatically tracks Debian Testing.

amber-phone

While a stable base is a good and desired thing to have on a phone, we’re still in an early development phase where things move very quickly. Debian Buster (and therefore PureOS Amber) contains GNOME 3.30, which is too old to sensibly adapt to the small screen of a phone – so we need something newer. The tight development timeline also calls for some temporary mobile-specific hacks that may not be well suited to desktops. To make this possible, the amber-phone suite was created: a thin repository overlay that augments amber (and its amber-updates and amber-security) with backported or patched packages that shouldn’t go onto every PureOS Amber-based PC out there.

The default repository list on the phone looks like this:

deb https://repo.pureos.net/pureos amber main
deb https://repo.pureos.net/pureos amber-updates main
deb https://repo.pureos.net/pureos amber-security main
deb https://repo.pureos.net/pureos amber-phone main

There is also amber-phone-staging, which is simply a place where updated packages are incubated for (currently) three days before being migrated to amber-phone, which makes QA easier. If you prefer to live on the bleeding edge, you can enable it by adding:

deb https://repo.pureos.net/pureos amber-proposed-updates main
deb https://repo.pureos.net/pureos amber-phone-staging main

On the Librem 5, PureOS is by default installed into two partitions: a small ext2 /boot partition and a larger ext4 root (/) partition. It looks, sounds, and feels like a regular PureOS (or Debian) installation, so there’s no need to go into detail here.

Let’s have a quick overview of which packages (and why) are there in amber-phone:

Backported newer upstream versions or Debian revisions:

– adwaita-icon-theme
– appstream-glib
– cogl
– epiphany-browser
– flash-kernel
– gstreamer1.0
– gnome-2048
– gnome-bluetooth
– gnome-chess
– gnome-taquin
– gsettings-desktop-schemas
– iagno
– iio-sensor-proxy
– libfolks
– libgom
– librust-serde-yaml
– librust-xkbcommon
– libsdl2
– libqmi
– libxmlb
– lollypop
– mesa
– meson
– modemmanager
– pixman
– pulseaudio

The packages above may contain patches that are still waiting to be upstreamed, or that have already been upstreamed but in later versions than the ones we’re using. This is the category most likely to grow in the future.

Backported packages with temporarily non-upstreamable patches:

– evince
– gcr
– gedit
– gtk+3.0
– gnome-clocks
– gnome-calculator
– gnome-contacts
– gnome-control-center
– gnome-initial-setup
– gnome-online-accounts
– gnome-settings-daemon
– gnome-software
– gnome-usage
– upower
– yelp

Those patches need more work before they can be upstreamed, as they may be unsuitable for desktops right now (for instance, automaximization of dialogs and back buttons instead of close buttons in GTK, or disabled panels in gnome-control-center that are unnecessary on a phone but perfectly useful on a desktop). The plan is to eventually get all these changes upstreamed; it will just take a bit longer – so this list should eventually shrink to zero.

Packages that don’t exist in Debian Buster:

– animatch
– calls
– chatty
– feedbackd
– gen-sshd-host-keys
– gtherm
– haegtesse
– kgx
– libhandy
– librem5-base
– librem5-dev-tools
– librem5-package-info-tool
– librem5-quick-start-guide
– librem5-quick-start-viewer
– librem5-user-docs
– linux-image-librem5
– plymouth-theme-librem5
– purple-carbons
– purple-lurch
– purple-mm-sms
– sound-theme-librem5
– squeekboard
– uuu
– virtboard
– wys

There are also phoc and phosh, but since these don’t exist in Debian Buster and are also useful on desktops, they are made available in amber-updates instead of amber-phone to make installing them on regular desktop PureOS easy (a similar thing may happen with other apps in the future, like King’s Cross or Chatty). These packages should eventually find their way into Debian as well (and some have already landed in Bullseye).

A notable omission right now is the bootloader – u-boot – which isn’t distributed as part of the distribution yet, but is externally added to the image by image-builder. This is going to change in the future.

Live long and prosper

There are other things that weren’t mentioned above – some parts of the stack are yet to be created (or have only just been, like feedbackd – a daemon for handling audible, visual and haptic feedback), but just like those mentioned earlier, these things will be created with upstream GNOME in mind, making sure they can easily grow and be useful well outside the Librem 5 context (or even outside GNOME where applicable). After all, GNU/Linux is likely to still be around, and it will still benefit from improvements made for mobile phones even if all GNU/Linux phones suddenly disappear, especially in the age of 2-in-1 laptops with mobile broadband connections.

 

Discover the Librem 5

Purism believes building the Librem 5 is just one step on the road to launching a digital rights movement, where we-the-people stand up for our digital rights, where we place the control of your data and your family’s data back where it belongs: in your own hands.

Preorder now

The post What is Mobile PureOS? appeared first on Purism.

21 Feb 03:12

Libra: Succinct Zero-Knowledge Proofs with Optimal Prover Computation

by Jiaheng Zhang and Dawn Song

This blog post is based on a paper authored by Tiancheng Xie, Jiaheng Zhang, Yupeng Zhang, Charalampos Papamanthou and Dawn Song.

TL;DR Libra is a zero-knowledge proof protocol that achieves extremely fast prover time and succinct proof size and verification time. Not only does it have good complexity in terms of asymptotics, but also its actual running time is well within the bounds of enabling realistic applications. It can be applied in areas such as blockchain technology and privacy-preserving smart contracts. It is currently being implemented by Oasis Labs.

Introduction and Motivation

Zero-knowledge proofs (ZKP) are cryptographic protocols between two parties, a prover and a verifier, in which the prover wants to convince the verifier of the validity of a statement without leaking any extra information beyond the fact that the statement is true. For example, the verifier could confirm that the prover computes some function ‘F(w) = y’ correctly even without knowing the input ‘w.’ Since they were first introduced by Goldwasser et al. [1], ZKP protocols have evolved from pure theoretical constructs to practical implementations. They have achieved proof sizes of just hundreds of bytes and verification times of a few milliseconds, regardless of the size of the statement being proved. Due to this successful transition to practice, ZKP protocols have found numerous applications, not only in the traditional computation delegation setting, but also in blockchain settings such as providing privacy of transactions in deployed cryptocurrencies (e.g., Zcash [2]).

A ZKP protocol must meet three criteria: completeness, soundness, and zero-knowledge. See below for an explanation.

Completeness, soundness, and zero-knowledge requirements for zero-knowledge proofs.

An explanation of the formal security requirements for zero-knowledge proofs.
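To make these three properties concrete, here is a toy sketch of Schnorr’s classic identification protocol, a far older and simpler zero-knowledge protocol than Libra. Everything below (the function name, the tiny parameters) is illustrative only; the parameters are far too small to be secure:

```python
import random

# Toy parameters (NOT secure): p = 2q + 1, and g generates the order-q subgroup
p, q, g = 23, 11, 2

def schnorr_round(x, rng=random):
    """One round of Schnorr identification: the prover knows x with y = g^x mod p."""
    y = pow(g, x, p)                # public key: the "statement" being proved
    r = rng.randrange(q)            # prover's secret nonce
    t = pow(g, r, p)                # commitment sent to the verifier
    c = rng.randrange(q)            # verifier's random challenge
    s = (r + c * x) % q             # prover's response
    # Verifier's check: g^s == t * y^c (mod p)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# Completeness: an honest prover always convinces the verifier.
assert all(schnorr_round(x=7) for _ in range(100))
```

Completeness is what the assertion exercises; soundness holds because answering two distinct challenges for one commitment would reveal x; and zero-knowledge holds because a transcript (t, c, s) can be simulated without knowing x.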

Despite such progress in practical implementations, ZKP protocols are still notoriously hard to scale to large statements due to a particularly high overhead of generating the proof. For most protocols, this is primarily because the prover has to perform a large number of cryptographic operations, such as exponentiations in an elliptic curve group. To make things worse, the asymptotic complexity of computing the proof is typically more than linear, e.g., ‘O(C log C)’ or even ‘O(C log^2 C),’ where ‘C’ is the size of the statement. Designing ZKP protocols that enjoy linear prover time as well as succinct proof size and verification time has therefore been an open problem.

Resolving this problem has significant practical implications. For example, we could generate proofs for larger statements on a blockchain within an acceptable time bound, which could further extend the privacy of transactions or smart contracts.
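A much simpler instance of the same idea, checking a delegated computation faster than redoing it, is Freivalds’ classic probabilistic test for matrix products. It offers neither zero-knowledge nor a succinct proof; this sketch only illustrates verification that is cheaper than recomputation (O(n^2) per round instead of O(n^3)):

```python
import random

def freivalds(A, B, C, rounds=20):
    """Probabilistically check that A * B == C using O(n^2) work per round,
    versus O(n^3) to recompute the product outright."""
    n = len(A)
    for _ in range(rounds):
        x = [random.randrange(2) for _ in range(n)]                       # random 0/1 vector
        Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]    # B·x
        ABx = [sum(A[i][j] * Bx[j] for j in range(n)) for i in range(n)]  # A·(B·x)
        Cx = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]    # C·x
        if ABx != Cx:
            return False              # definitely not equal
    return True                       # wrong with probability at most 2^-rounds

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]              # the true product A * B
assert freivalds(A, B, C)
```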

Enter Libra (not affiliated with the Libra project launched by Facebook)

Originally proposed in a paper from early 2019, Libra solves the problem described above with a zero-knowledge proof protocol that has three important properties:

  1. Optimal prover time: Libra only needs time that is linear in the statement size to generate a proof.
  2. Succinct verification time and proof size: both of the proof size and the verification time in Libra are logarithmic in the statement size.
  3. Universal trusted setup: Libra only needs a one-time trusted setup to generate the public parameters which can be used for all statements to be proved, which explains the term “universal”.

The underlying protocols of Libra are an interactive proof protocol proposed by Goldwasser, Kalai, and Rothblum in [5] (referred to as the GKR protocol), and the verifiable polynomial delegation (VPD) scheme proposed by Zhang et al. in [6]. It comes with a one-time trusted setup (not a per-statement trusted setup) that depends only on the size of the input (witness) to the statement being proved.

In the original GKR protocol, the prover generates a proof of the statement without satisfying the zero-knowledge property. That means the verifier could learn the prover’s secret information from the proof itself, which we want to avoid. In addition, in the GKR protocol the computation of the statement is expressed as an arithmetic layered circuit (a layered circuit with only addition and multiplication gates), and the time for the prover to generate the proof is polynomial in the number of gates in this circuit. This is slow if the circuit size is large.
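At the heart of GKR is the classic sum-check protocol, run layer by layer over the circuit. The sketch below is our own naive illustration, not Libra’s code: the honest prover here recomputes exponential-size sums each round, which is exactly the cost Libra’s linear-time algorithm avoids.

```python
import random
from itertools import product

P = 2**31 - 1   # a small prime field, for illustration only

def cube(k):
    """All k-bit boolean assignments, i.e. {0,1}^k."""
    return product([0, 1], repeat=k)

def sumcheck(f, n, rng=random):
    """Interactive sum-check that claim = sum of f over {0,1}^n.
    f must be multilinear, so each round's restriction g_i has degree 1
    and the prover can send just g_i(0) and g_i(1)."""
    claim = sum(f(list(b)) for b in cube(n)) % P
    fixed = []                                  # verifier's random points so far
    for i in range(n):
        # Prover: fix variable i to 0 and to 1, sum out the remaining variables
        g0 = sum(f(fixed + [0] + list(rest)) for rest in cube(n - i - 1)) % P
        g1 = sum(f(fixed + [1] + list(rest)) for rest in cube(n - i - 1)) % P
        if (g0 + g1) % P != claim:              # verifier's consistency check
            return False
        r = rng.randrange(P)                    # verifier's random challenge
        claim = (g0 + r * (g1 - g0)) % P        # g_i(r) by linear interpolation
        fixed.append(r)
    return claim == f(fixed) % P                # one final query at a random point

# f(x1, x2, x3) = x1*x2 + 2*x3 is multilinear
assert sumcheck(lambda v: (v[0] * v[1] + 2 * v[2]) % P, 3)
```

The verifier’s work per round is constant here; it is the prover’s exponential-time sums that Libra replaces with a linear-time algorithm.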

Libra’s contribution is solving these two problems in the GKR protocol and could be summarized as follows:

  1. GKR protocol with linear prover time. Libra features a new linear-time algorithm to generate a GKR proof. Our new algorithm does not require any specific structure in the circuit and our result subsumes all existing improvements on the GKR prover which assume special circuit structures, such as regular circuits in [7], data-parallel circuits in [7, 8], circuits with different sub-copies in [9].
  2. An efficient approach to turn Libra into zero-knowledge. We show a way to mask the responses of our new linear-time prover with small random polynomials so as to meet the zero-knowledge property. This new zero-knowledge variant of the protocol introduces minimal overhead on the verification time compared to the original (unmasked) GKR protocol.

Comparison with existing ZKP protocols

Table 1 shows a detailed comparison between the asymptotic complexity of Libra and existing ZKP protocols. A first observation is that Libra has the best prover time among all existing protocols, indicated as row P in the table. In terms of asymptotics, Libra is the only protocol that simultaneously achieves linear prover time, succinct verification, and succinct proof size for structured circuits. The only other protocol with linear prover time is Bulletproofs, whose verification time is linear even for structured circuits. On the practical front, Bulletproofs’ prover and verifier times are high due to the large number of cryptographic operations required for every gate of the circuit.

The proof size and verification time of Libra are also competitive with other protocols. In asymptotic terms, our proof size is larger only than those of libSNARK and Bulletproofs, and our verification is slower only than libSNARK and libSTARK. Compared to Hyrax, which is based on techniques similar to ours, Libra improves performance in all aspects while requiring a one-time trusted setup.

Table comparing Libra to existing ZKP protocols.

Comparison of Libra to existing ZKP protocols, where (G, P, V, |π|) denote the trusted setup algorithm, the prover algorithm, the verification algorithm and the proof size respectively. Also, C is the size of the circuit with depth d, and n is the size of its input.

Implementation and evaluation

Software. We implement Libra, our new zero-knowledge proof protocol, in C++. Our protocol provides an interface that takes a generic layered arithmetic circuit as input and generates a zero-knowledge proof for the circuit and its input. We support a class of 512-bit unsigned integers that improves on the performance of the GMP library in specific cases, and use it together with GMP for large-number and field arithmetic. We use the popular cryptographic library “ate-pairing” on a 254-bit elliptic curve for the bilinear map used in the zero-knowledge VPD. We have released the code as an open-source system (https://github.com/sunblaze-ucb/Libra).

Hardware. We run all of the experiments on Amazon EC2 c5.9xlarge instances with 70GB of RAM and an Intel Xeon Platinum 8124M CPU with 3GHz virtual cores. Our current implementation is not parallelized and uses only a single CPU core in the experiments, so we hypothesize that the reported numbers could be improved further. We report the average running time of 10 executions.

Methodology and benchmarks. We compare our protocol to existing systems on the benchmarks below:

  1. Matrix multiplication: Prover P proves to the verifier V that it knows two matrices whose product equals a publicly known matrix. We evaluate on matrix sizes from 4×4 to 256×256.
  2. Image scaling: P proves to V that it correctly computes a low-resolution image by scaling down a high-resolution image using the classic Lanczos resampling method. We evaluate by fixing the window size and increasing the image size from 112×112 to 1072×1072.
  3. Merkle tree: P proves to V that it knows the values of the leaves of a Merkle tree that computes to a public root value. We use SHA-256 as the hash function, and increase the number of leaves from 16 to 256 in the experiments.
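To give a feel for the statement in the third benchmark, a binary SHA-256 Merkle root over a power-of-two number of leaves can be computed as below. This is only a hypothetical sketch of the relation being proved, not Libra’s actual circuit; the leaf encoding and the lack of padding are our assumptions:

```python
import hashlib

def merkle_root(leaves):
    """Compute a binary Merkle root over SHA-256; `leaves` is a list of
    byte strings whose length is a power of two (16..256 in the benchmarks)."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        # Hash each adjacent pair to form the next level up
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([bytes([i]) for i in range(16)])
# The prover's statement: "I know 16 leaves that hash to this public root."
```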

We report the prover time, proof size and verification time in Figure 1.

Graph comparing Libra to existing ZKP protocols.

This brings us to the conclusion. As shown in Figure 1(a)(b)(c), the prover in Libra is the fastest among all systems in all three benchmarks we tested. Figures 1(d)(e)(f) show the verification time. Our verifier is much slower than libSNARK and libSTARK, which run in 1.8 ms and 28–44 ms respectively across all the benchmarks. Other than these two systems, Libra’s verification time is the fastest, as it grows sub-linearly with the circuit size. We report the proof size in Figures 1(g)(h)(i). Our proof size is much bigger than libSNARK’s and Bulletproofs’, but smaller than those of Aurora, Hyrax, Ligero and libSTARK.

Immediate Implementation. Tiancheng Xie and Jiaheng Zhang interned at Oasis Labs in summer 2019 and worked on implementing Libra. The Oasis team plans to develop it further in the future.

The Next Step after Libra: Introducing Virgo!

Based on the exciting results of Libra, we set out to solve one vital limitation of our proposal, namely the trusted setup. In our new follow-up project, called “Virgo”, we propose a transparent ZKP protocol with even better prover time and verification time. The proof size becomes larger, but it is reasonable and still works well in practice. Specifically, the prover time of Virgo is O(C + n log n), while both the proof size and the verification time are O(d log C + log^2 n) for a depth-d circuit with n inputs and C gates. Our scheme uses only lightweight cryptographic primitives such as random oracles and is post-quantum secure. Our implementation of the protocol shows that it takes only 50 seconds to generate a proof for a circuit computing a Merkle tree with 256 leaves, at least an order of magnitude faster than existing transparent schemes. The verification time is 50 ms, and the proof size is 253 KB, both competitive with existing transparent zero-knowledge proof protocols.

Summary. We have introduced Libra, a zero-knowledge proof protocol that achieves extremely fast prover time and succinct proof size and verification time. Not only does it have good complexity in terms of asymptotics, but its actual running time is also well within the bounds of enabling realistic applications. Our cryptographic technique can be applied in other application areas such as blockchain technology and privacy-preserving smart contracts. To learn more about Libra, you can read our full paper here. You can view the code here.

We thank Sarah Allen, IC3 Community Manager, for her help in compiling this text.

21 Feb 03:11

Y Combinator, Not Lambda School, Is Unbundling Education

Byrne Hobart, The Startup, Medium, Feb 20, 2020

Reading this Twitter thread led me to this article, which says this: "If you were reinventing the Ivy League as a signaling-focused product, your stripped-down version might look like this: you invite a small cohort of talented people to move to a city for about three months, you host some social events so they get to know each other, you have them work on projects and you advise them on those, and afterwards you introduce them to a bunch of savvy rich people. In other words, you’d invent Y Combinator." The Twitter thread is just misguided. But the article taps into what is really the value proposition for Ivy League universities, and we should be asking how to transfer this value to everyone, not just the sons and daughters of the well-connected.

Web: [Direct Link] [This Post]
21 Feb 03:10

A closer look at how Waze develops its maps

by Ted Kritsonis

In any given week, there are up to 60,000 people contributing to Waze’s map development across the globe, and they’ve become growing communities of dedicated volunteers.

Most of Waze’s 540 employees are still based in Tel Aviv, where the app was first conceived in 2008, but it’s the community support that has sustained its dynamism since then. Drivers who interact with the app to report something are known as “Wazers,” and they have been crucial to keeping the map dynamic in real time. This army of active community participants is a big part of why Waze’s navigation maps are considered among the most accurate in the world.

“Mega Meetups” are integral to cultivating this symbiotic relationship, and they occur more often than many may realize. Outside of the local community meetups for Wazers themselves, these are the big ones that happen in five regions: North America, Asia-Pacific (APAC), Europe, Latin America and Brazil.

This year’s North American Mega Meetup took place in Miami from February 8th to 9th, and MobileSyrup got a front-row seat into how this collaboration works.

Meeting together

The word “community” comes up often at a meetup like this. Up to 40 Waze staff were on hand, along with 87 Wazers flown in from across Canada and the United States. Two more, though from a different region, hailed from Mexico.

Wazers fall under five different communities. Editors are the biggest one in overall numbers, responsible for adjusting and improving maps for accuracy. They’re followed by the Beta testing group that tests new features and additions to the app. The Localization community translates everything into other languages (56 so far). The Partners community helps establish working relationships and collaborations with a municipality’s public and private sector. And lastly, there’s Carpool, a newer feature that has only been available in the U.S., Mexico, Brazil and Israel over the last three years.

Within these communities lies a hierarchy of seniority not unlike a graduated system of promotion. There are six levels of editors, including “area managers” looking after an entire province or state. Then there are local and global “champs,” editors above level six who achieve this status by way of a nomination and vote. This all happens within the communities, so Waze doesn’t bestow a title or promote anyone on its own.

While they did get to enjoy a weekend getaway in a posh Miami hotel in the city’s Brickell district, Waze doesn’t pay any of its contributors. They’re neither staff nor contract workers. Apart from the odd swag they may receive, all these editors, testers, translators and facilitators appear to be doing it out of sheer passion and altruism.

They’re also given a direct line of access — usually via email, but sometimes through Google Hangouts — to Waze executives overseeing each community. The goal is to act upon bugs and suggestions community members put forth, and Mega Meetups are a way to talk about what’s next for the app with mutual feedback.

Mapping in Canada

There were 10 Canadians on hand at the Meetup, each with years of experience working on the app as editors and beta testers. Since Waze first came to Canada just over a decade ago, 10,000 people in the country have edited a map at least once. There are currently 1,500 active editors nationwide, plus 405 beta testers.

Jason Mushaluk lives in Winnipeg and started editing Waze maps in November 2012. Toronto native Vinujan Aravinthan started editing in March 2014, while Montreal-based Philippe Royal also joined in 2014.

What started out as a curious indulgence turned into a major undertaking for all three men. They met and became friends by learning to use Waze’s editing and beta testing tools. Each of their paths began with mapping problems begging for a fix, and when Waze advertised its desire for volunteers to help map Canadian roads, they went for it. At his peak, Mushaluk was working as a paramedic during the day and editing for five hours per night afterward.

“None of us got into editing to make friends, but connecting with the community is something that I’ve been personally working on,” says Mushaluk. “(Aravinthan and Royal) really work well with those skills, and it helps when making contact with municipal authorities who have worked with Waze editors to improve map accuracy, like I have.”

That sort of collaboration is growing in major Canadian cities, according to all three men. It also includes public-private sector cases, like the 407 ETR toll highway in the Greater Toronto Area. Aravinthan says editors like him are often tipped off in advance as to when lane or ramp closures are set to begin and end for real-time updating. The same can be true of toll-free highways.

“We do have a point of contact at the Ontario Ministry of Transportation (MTO) and when they updated the speed limit on certain highways to 110 km/h, we wanted to know the exact points where it started and ended,” says Aravinthan. “But it wasn’t updated on the Highway Traffic Act, and we only try to add things that are enforceable, so after a back-and-forth, it was updated the following day and we had it in the map.”

Royal was a student in Montreal working at the Ministry of Transportation for Quebec when he first started editing in 2014, realizing that information he had in press releases could serve drivers well if plugged into Waze’s maps. “I was just looking for a way to use the data that I had, so that’s how I managed to get in touch with other editors in Montreal and got involved in the community,” says Royal.

Increasing collaboration

The irony was that everyday people were making connections with government entities before Waze staff did. While that wasn’t the case all the time, the community played an important role in Canada. The same has been true of adding features and tools that Waze users also demanded.

Arguably the most impactful in recent memory was including estimated costs for taking toll roads and integrating Google Assistant into the app (which is still not available in Canada). Both features were coveted by Wazers of all stripes, and reportedly a dominant fixture at Waze meetups throughout 2018-19.

At this meeting, the discussion largely centred around technical points related to road closures, though more specifically, on large events, be they organized events or crises. The NFL’s Super Bowl, which had just taken place in Miami the weekend before, is one example where road closures can be fluid. Impending natural disasters are another, much like Hurricane Dorian, for example.

“If one major roadway gets clogged with traffic, they need a backup solution, so we can push new routing in instantaneously because we already know what their plans are,” says Stav Salomon Sapir, Waze’s Localization Manager. “The map editors are super instrumental, and we’ve sometimes had cases where they sit in the police department all day during these major events helping them because they can see where the traffic is on the map.”

Assessing Carpool

Dani Simons, who heads up Waze’s public sector partnerships, says the company has 1,300 data-sharing government partners across the globe. Some use the mapping data to improve traffic operations, whereas others use it to improve road safety or for flood protection in areas prone to that potentiality.

Waze regularly documents case studies pointing out how its maps help cities with traffic, but there’s also been no shortage of criticism towards the company for flooding otherwise quiet side streets with excess traffic.

The company’s Carpool feature is actually a separate app, offering the option to either be a driver or passenger. The reimbursement system is set up to compensate drivers for mileage without a for-profit element to it. All rates are set and pegged to government figures, just like they are with the Internal Revenue Service (IRS) in the U.S. It’s not clear when it will become available in Canada.

“Carpool is a powerful tool, and it’s a way to reduce the number of people driving by themselves, which reduces the overall number of cars on the road,” says Simons. “It is very important that we not only have, say, 10,000 people using it in (a big Canadian city), they also have to be concentrated in pockets so when drivers open the app, they find someone who lives or works nearby and wants to share a ride.”

Simons didn’t share specific data points on how successful it is in the countries it currently runs in, but there’s a commitment to expand it. It also has stiff competition. Uber’s Ride Pass is a monthly subscription that reduces the cost of taking an UberX or Pool trip, though Montreal is the only city in Canada currently offering it. Lyft doesn’t offer a monthly pass in the country, choosing to send Ride Passes that knock off $5-$10 per ride for a limited time.

Unlike other parts of the world, North American cities haven’t embraced congestion pricing to reduce traffic, though New York looks to be the first. The plan calls for a cordon-free zone south of 60th St. in Manhattan and is set to start in 2021.

A new voice

Waze’s default voice for the app’s spoken turn-by-turn directions in Canada and the U.S. is known as “Jane.” New voices were up for a vote at the meetup, as Waze tries to bring in a little more personality to Jane’s otherwise monotonous prose.

Waze has experimented with celebrity voices, like Morgan Freeman, DJ Khaled and the Cookie Monster, and more may be in the works, though no one at Waze would confirm any names. With the votes in, beta testers will get the chance to use the new voices first before a final call is made to release them publicly.

There’s also the option to record your own voice, though it’s only in English and won’t include street names. You can also share it with friends or relatives. Nothing new was announced for that feature, either.

Moving ahead

Rumours of Waze’s demise in favour of Google Maps appear to have been exaggerated. Google is letting it ride on its own, and more importantly, leaving the crowdsourcing element driving it firmly intact. Google Maps once allowed for map editing until that stopped in 2017.

The real question is how much more effective the app can get in routing through traffic. Gridlock is only getting worse, and solutions discussed at the meetup haven’t been fully realized yet. Waze’s entire premise is predicated on navigating people in vehicles, yet part of the focus going forward is to reduce the number of cars on the road. Tighter collaboration with public and private sector partners looks to be ramping up.

Other than advertising on the map and Google’s coffers, it’s also not clear what other revenue streams are available to it. Augmented reality (AR) is coming to the map in some form in the coming years, raising the potential for more partnerships. So is the possibility Waze could integrate payments where vehicles are involved, like at gas stations and drive-thrus, for instance.

What is evident is that Waze wouldn’t be what it is without the large number of volunteers maintaining the communities supporting its development. It’s an unusual situation that seemingly works for both sides. Then there are the drivers themselves, whose input, small as it may be on an average drive, pools together to make these two sides come together.

The post A closer look at how Waze develops its maps appeared first on MobileSyrup.

21 Feb 02:43

Scale of Bloomberg net worth

by Nathan Yau

While we’re on the topic of Mike Bloomberg’s money, here’s another view from Mother Jones:

I guess he’s rich.

Tags: Bloomberg, Mother Jones, net worth, scale

21 Feb 02:43

Quoting Juliette Cezzar

So next time someone is giving you feedback about something you made, think to yourself that to win means getting two or three insights, ideas, or suggestions that you are excited about, and that you couldn’t think up on your own.

Juliette Cezzar