Shared posts

01 Aug 01:31

The Default Settings On An Expensive Camera

by Richard Millington

A relative once bought a very expensive camera.

However, she found the number of new features confusing. So she simply used all the default settings. Her photos turned out the same as those from her old camera.

This happens a lot in communities too.

When you move from an inexpensive to a premium platform, you have a lot of new features and settings to fiddle with. Many (perhaps most?) organisations are nervous about making tweaks and simply use the default settings. Unsurprisingly, they don’t see the results they were expecting.

Some things to consider here:

First, don’t confuse a default setting with the best practice. The default settings aren’t the combined wisdom of the platform’s many years of experience (trust me, I checked). They’re no better than the default settings on a camera or the default ringtone on your phone. They’re simply settings that you’re expected to change to suit your needs.

Second, there isn’t much point upgrading to a new platform if you’re not going to invest time and money learning how to use it properly. You need to set aside time in your calendar to learn how to do this. This is time-intensive but it’s the entire point. When you move to a new platform the slight change in how members can talk to one another pales in comparison with the extra new features you’re paying for.

Third, take the courses offered by the vendor, set up a staging site where you can fiddle with the settings, and talk to others using the same platform. Take what’s likely to be a big weakness and turn it into your biggest strength.

01 Aug 01:31

Annoying Cookie Consent – But…

by Martin

In recent months, more and more web sites greet me with super annoying pop-ups and request my consent to their use of cookies. Some even give me the option to control for which purposes cookies are used. Of course, web site owners don’t do this for fun but because of laws such as the GDPR that have come into effect in recent years. While I think that the GDPR is a good thing overall, I am particularly unhappy about this side effect.

It’s nice that one can in theory control for which purposes cookies are used. But honestly, did you ever have a closer look and actually change the ‘maximum’ settings? No? I thought so.

I suppose that most people will only see the dialog once. Not me, however, as I have configured my browser to delete all cookies when I exit. So the next time I come back to the page, the ritual repeats. On the one hand this is super annoying, but on the other hand it shows that the web site does not remember me after I have closed the browser. So I’m trying to recondition myself not to be bothered by the pop-ups but rather to see them as a positive sign that the website does not link me to a previous visit and does not have a personal profile of me. That is, unless it uses other means of tracking my behavior, which are not yet regulated. Always the optimist.

01 Aug 01:31

The long, slow grind continues

by Chris Grey
It’s about six weeks since Boris Johnson said there was no reason why the outline of a Brexit deal couldn’t be sealed by the end of July, as he put a “tiger in the tank” of the talks. As we reach that date there is no such outline in sight (unless, of course, you count the Political Declaration he signed, which was supposed to be just that).

It’s true that Reuters reported some slightly optimistic noises from Michel Barnier, and there were some very slight indications of progress this week. Against that, the Financial Times has reported (£) that Dominic Cummings favours a very light touch approach to UK state aid regulations post-transition. If so, that would make the prospect of a trade deal very remote, as it goes to the heart of the EU’s longstanding concerns about a level playing field. It would mean, in effect, that there would be no guarantees that, in the future, the UK wouldn’t use state aid to create unfair competition with EU companies (for a detailed overview of the issues of state aid and Brexit, see this 2019 IPPR Report).

If Cummings gets his way (others, especially the Treasury, take a different view) it seems a strange hill for a Tory government to see a trade deal die on. Escaping EU state aid rules was more a feature of the Lexiter case for Brexit (and, in passing, one based on misunderstandings of those rules). But if the report is correct it can be explained in two ways. One is just that it’s the latest outing for the purist sovereignty-at-any-price dogma. The second is that it shows how far the current Tory party is moving from traditional free market ideology, both in terms of potentially supporting uncompetitive industries and in facilitating cronyism in the award of public service contracts.

In this way, a lax state aid regime can be seen as reflecting two, related, longstanding contradictions within Brexit. On the one hand, it shows the contradiction (£) between Brexit as a nationalist, protectionist project and as promoting a globalist, free trade agenda. On the other hand, whilst it is already being talked about by Brexiters in populist terms of protecting British companies and jobs, its actual use might be to enable elite decision makers to give state hand-outs to their chums. Some have suggested that this might have a particular appeal to Cummings.

As well as showing some Brexit contradictions, the state aid issue also reveals some ironies. One is that it is by no means clear what this new ‘British system’ would be, how it would relate to or differ from WTO anti-subsidy rules, and how it would operate given existing state aid commitments signed up to by the government in the Northern Ireland Protocol (I don’t pretend to understand these exceedingly complex issues but they are addressed in a Twitter thread from George Peretz QC – a leading legal expert in this area, commenting on the FT report mentioned above – who most certainly does). The other is that, despite the fact that a light touch system would be anathema to the EU, some Brexiters are actually suggesting that it would be a betrayal of Brexit and would be compromising with Brussels (this, in turn, reflects the way that any actually existing Brexit is seen by some as a ‘betrayal of Brexit’).

Over-promising and under-delivering

The fact that Johnson’s ‘tiger in the tank’ turned out to be a damp squib shouldn’t be a surprise. After all, in December 2016, when Foreign Secretary, he declared that 18 months would be “absolutely ample” to complete a deal with the EU. This inane boosterism is obviously partly just a matter of his general approach to politics – as has been abundantly evident in his handling of coronavirus – but it also falls into the standard Brexiter repertoire of over-promising and under-delivering. All the now infamous quotes about ‘holding all the cards’ and ‘the easiest deal in history’ are testament to that.

Increasingly, such talk seems like the way that in 1914 what became the Great War was expected to ‘all be over by Christmas’. For although the Brexiters continue to hand out metaphorical white feathers to the supposedly spineless and unpatriotic, what we now see is the long, slow grinding attrition of Brexit reality.

It’s impossible to keep up with all of it, but just this week there has been further detail on the multiple lorry sites to be built in Kent, on the lack of customs preparedness (£), on the extra costs Brexit is causing the Home Office, on the lack of preparedness for the implementation of the Northern Ireland Protocol, on the impact on bank relocations, on the problem of how to manage the Eurotunnel, on the consequences for Gibraltar, on the uncertainty over how Rules of Origin for UK-EU trade will work, and on the impact of Brexit on biodiversity. Meanwhile, a new study from the LSE’s Centre for Economic Performance shows the double shock that businesses are going to experience when the effects of ending the transition period are overlaid on those of coronavirus, with the sectors relatively unaffected by the virus worst hit by Brexit.

Just from this short list of some of the most recent reports can be seen the enormous range of problems and damage that Brexit is throwing up. It is, to continue the Great War analogy, like reading the daily casualty lists. And just as it seems incredible, now, that there was not more public outcry as those casualties mounted so, too, is it incredible that there is apparently still so little public concern about what Brexit is doing to this country.

Of course coronavirus is a big reason for that since, understandably, it has first call on media time and people’s attention. Another is that the actual impact of many of these things has yet to be felt by the public.  A third is the Labour Party’s decision to remain almost entirely silent about Brexit. That is tactically understandable both because of coronavirus and also because revisiting the Brexit battles would be a gift to the Conservatives. At the same time, it is concerning that it is such a taboo topic. The things just mentioned – and many others – are happening and will continue to happen, and will affect almost every aspect of people’s lives after the end of the transition. To say that is not, in fact, to revisit the Brexit battles but to recognize that we are now on a new terrain, that of Brexit effects.

Public unconcern

The apparent lack of widespread public concern about what is happening, and what is in prospect, may reflect part of the political scientist Professor Matthew Goodwin’s explanation of the continuing popularity of Boris Johnson. In an essay this week in UnHerd, Goodwin suggests that Johnson’s appeal to voters lies in his positivity and forward-looking focus on national renewal and pride, and for this reason “they are generally willing to give him a free pass when he fumbles” on technocratic delivery. They are not “standing behind him because of what Michael Oakeshott called the politics of pragmatism – they do not see the world as the declinists see it, as merely an exercise in performance management”.

Johnson’s positivity contrasts with those Goodwin calls the ‘declinists’ who see Brexit, in particular, as damaging to the international standing and economic well-being of Britain. These “highly-educated” declinists are “Left-wing or liberal”, “gloomy”, “condescending”, often “narcissistic”, “vicious” and associated with “technocracy” and the “elite-driven Remain campaign” (Goodwin is a scholar of populist politics, but it is hard to resist the impression that he has come somewhat to identify with the object of his studies). Their writings, he says, are to be found in the New York Times, the Guardian and the Financial Times (and I venture to suggest that this lowly blog could be seen as a very minor example).

Goodwin is clearly right that Johnson appeals to a political constituency that spans “traditional ‘true-blue’ Tories and instinctively socially conservative blue-collar workers”, and in which patriotism and national identity play a key role, in the way that Margaret Thatcher did in the 1980s. But it’s over 40 years since Thatcher’s first electoral victory and what is missing in the comparison is the new distinction between truth-based and post-truth politics. It’s the latter which, distinctively, Johnson embodies – in a way that, irrespective of one’s views of her, could never have been said of Thatcher - and, whilst there can be no doubting its appeal to many voters it’s a major error to ignore its nature and its limits.

For the case against Brexit isn’t declinist so much as it is evidence-based, and it is that which marks it out from all the empty promises and false predictions of Brexiters. That isn’t to say that the evidence speaks for itself – there are always different interpretations, as well as selections, of evidence – but that it is within the terrain of evidential debate and analysis that the anti-Brexit case is made. The opposite to this is not, as Goodwin (again drawing on the Conservative political philosopher Michael Oakeshott) has it, the “politics of faith”. It’s the politics of post-truth.

Brexit and post-truth politics

It’s important to understand that post-truth politics is not – or not simply – about lies which, after all, have long been part of politics. Rather, it is about blurring the distinction between truth and lies altogether, and positing that, at best, there are alternative truths. That then takes on a particular twist within populism, with the suggestion that, amongst those alternative truths, there are some which have the special status of being ‘the people’s truth’.

That is obvious in the way that Brexiters routinely wheel out claims – about WTO terms, the interests of German car makers, technological solutions for the Irish border, GATT Article XXIV etc – which don’t have a sound evidence base but which can appear plausible. It’s on view every time they extol the benefits of an ‘independent trade policy’. For the most part, then, they sound as if they are offering a ‘politics of pragmatism’ and, even, a ‘technocratic’ programme. It’s certainly the case that there is a faith-based element of Brexit – that true belief is needed to make it work – and that has had a very important impact on politics and the civil service. However, Brexiters don’t appeal simply, or even primarily, to faith but to alternative, sometimes partial, truths, the repetition of which serves to suggest that the claims propounded by remainers are just an artefact of their elitism.

Goodwin himself says pretty much the same thing in his suggestion that declinists “seldom remain in the world of objective reality … [and] … are neither able to see the world in a balanced way nor in a way that most ordinary voters see it”. The key word here, undoubtedly, is “ordinary”, with its suggestion that these voters – despite being, even on Johnson’s polling, a minority - are the ‘real people’, with a more balanced, more objective or perhaps in some way more authentic view than that of the remainer or declinist elites. That they are ‘ordinary’ makes them, paradoxically, ‘special’.

The same logic was evident in Michael Gove’s now infamous line about people having had enough of experts, which is often quoted but rarely in its full context. He made the remark when confronted with a whole list of organizations which had counselled against Brexit, but then went on to say that instead of relying on these experts he “had faith in the British people to make the right decision”. The experts, he suggested, gave their warnings because they were “people who had done very well thank you out of the EU”, thus, implicitly, tainting ‘their’ truth as that of the elite rather than of the people.

The point is that the invocation of the politics of faith only starts when the attempts to couch the argument in terms of the politics of pragmatism fail. Indeed, Brexiters are delighted to cite experts approvingly – Professor Patrick Minford or Shanker Singham, for example – when they make the case for Brexit. It is only when confronted with the vast bulk of expert opinion to the contrary that expertise is derided.

Yet, accepting that there are always differences of interpretation, truth does have its own validity and each and every truth claim made by Brexiters is gradually being demolished. Eighteen months was not “absolutely ample” to do a deal, German car makers didn’t have the influence it was claimed they would, trade deals haven’t all been rolled over, the European Medicines Agency and European Banking Authority didn’t stay in the UK (£), Turkey isn’t joining the EU, GATT Article XXIV doesn’t have the meaning they claimed it did, the Irish border isn’t unaffected by Brexit, and the technological solutions to it do not, at least as yet, exist. In fact, I can’t think of a single claim made by the Brexiters that has so far proved true. Again, it’s only at this point, when the limitations of post-truth politics become unavoidable, that Brexit becomes cast as the ‘politics of faith’.

The dangers of populism

The dangers of this are obvious, for they are the dangers of, precisely, populism. As soon as politics invokes ‘the people’ as a unitary, undifferentiated entity it immediately stigmatises those who disagree as not being of the people – and at the extreme as not being people at all. Even the more benign sounding term ‘ordinary voters’ carries the implication that those they are differentiated from are in some way less real, authentic or normal voters.

These dangers become particularly acute – and the categories particularly untenable – in the context of a highly divided population. For what can it really mean in relation to Brexit, or to Johnson’s support more generally, to take a population split more or less 50/50 and describe only one group as ‘ordinary’ or as ‘the people’? Indeed, much of the damage of Brexit has been to take a very close result as mandating the least consensual or compromising form of Brexit to be enacted as ‘the will of the people’.

This quasi-mythological invocation of ‘the people’ contributes to forgetting that Brexit was sold to voters not on a faith prospectus but on the basis of post-truth claims that purported to be a pragmatic programme. This is very much the point I made in last week’s post, about how it is only since then that it has been claimed that ‘it was never about the money’. Without reprising that argument, consider just the Gove ‘experts’ interview linked to above. There he also makes the explicitly economic case that EU membership meant that British people suffered “lower wages” and had their “access to public services restricted”, with the implication that, after leaving the EU, these things would improve.

So how will leave voters react as these and all the other claims become falsified? They may well, for the reasons Goodwin suggests, continue to support Johnson and, for that matter, Brexit on the basis of faith. I’m not so sure of that for two reasons. One is just the general observation that, as unemployment rises, support for the government tends to fall. The other is the particular nature of Brexit in that, however it is delivered, some or many leavers will regard it as a betrayal. Indeed, if Keir Starmer is smart then he will be able to make much of Johnson having ‘bungled’ Brexit.

However, if voters don’t blame Johnson then who will they blame? As can already be seen, it will be the declinists or, as they are more usually called, the remainer saboteurs – those who are not ‘the people’ - as if they, rather than the Brexiters, were responsible for the promises made but not fulfilled. At that point, politics will turn very ugly indeed.

Or an anti-climax?

But there is another, less alarming but in its own way depressing, scenario, which increasingly seems to me to be quite likely. It’s that a minimal kind of deal is done that at least avoids the most immediate, calamitous effects at the end of transition. The other adverse effects will come in a slow, undramatic way and attract relatively little attention. The economic, cultural and geo-political impoverishment will be locked in but also normalised. Bits and pieces of the relationship will get patched and fudged over the coming years - possibly for the better, possibly not.

If so, the whole Brexit saga will fizzle out with an anticlimactic series of whimpers rather than a bang, leaving both remainers and leavers feeling disgruntled, irritated and cheated. Perhaps that would be appropriate to the long, slow, grinding attrition it has turned into. Whether envisaging such a scenario should be seen as optimistic or pessimistic I am not sure.

31 Jul 03:17

Wanted: A Notation for ‘Why’

by Greg Wilson

We’ve been working on the next version of TidyBlocks, which means I’ve been programming again after a hiatus of several months. I enjoy it, much as I imagine my mother enjoyed knitting, but the older I get, the more frustrated I am by our tools. For example, the Program class has a Map called waiting that maps each runnable pipeline to the set of strings (labels) whose results it still depends on; a pipeline can depend on zero or more other pipelines. As a pipeline runs, it can call Program.notify(label) to say, “Tell anything that depends on label that the result it needs to run is now available.” Once all of a pipeline’s dependencies are ready, it is removed from waiting and added to the runnable queue (called queue). The code looks like this:

/**
 * Notify the manager that a named pipeline has finished running.
 * This enqueues pipeline functions to run if their dependencies are satisfied.
 * @param {string} label Name of the result that was just produced.
 */
notify (label) {
  util.check(label && (typeof label === 'string'),
             `Cannot notify with empty label`)
  util.check(this.env instanceof Env,
             `Program must have non-null environment when notifying`)
  const toRemove = []
  this.waiting.forEach((dependencies, pipeline) => {
    dependencies.delete(label)
    if (dependencies.size === 0) {
      this.queue.push(pipeline)
      toRemove.push(pipeline)
    }
  })
  toRemove.forEach(pipeline => this.waiting.delete(pipeline))
}

But why does this method build a list called toRemove while it’s checking dependencies, then go back and remove the pipelines in that list from waiting? Why doesn’t it just call this.waiting.delete(pipeline) inside the first forEach? The reason is that it’s not safe to remove items from a Map while iterating over it: doing so can result in items being missed or in the wrong item being removed.
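The defensive two-pass idiom can also be pulled out into a reusable helper. This is only a hypothetical sketch (the deleteWhere name is invented, not part of TidyBlocks), but it captures the same collect-then-delete shape:

```javascript
// Hypothetical helper illustrating the two-pass pattern from notify():
// collect the keys to remove while iterating, then delete them once
// iteration has finished, so the traversal never observes its own
// mutations. Returns the number of entries removed.
const deleteWhere = (map, predicate) => {
  const doomed = []
  map.forEach((value, key) => {
    if (predicate(value, key)) doomed.push(key)
  })
  doomed.forEach(key => map.delete(key))
  return doomed.length
}
```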

That’s not what annoys me, though. What annoys me is that I don’t have any better way to explain why two forEaches are needed than a sentence or two of plain text. Modeling notations like UML can show what I’m doing (sort of), and with enough work I could write something in a rigorous specification language like TLA+ that would allow the computer to warn me if I did try to delete items inside the main forEach, but neither of those is the “why” that the next human being needs.

All of which has me thinking about dplyr pipelines and about whether every verb should have an optional inOrderTo parameter so that I can write pipelines like this:

infant_hiv %>%
  filter(estimate != 0.95,
         inOrderTo="Get rid of markers used by field workers to indicate unreliable data") %>%
  filter((0.5 <= low) & (high <= 0.7),
         inOrderTo="Select estimates in the 50-70% range") %>%
  group_by(year,
           inOrderTo="Look at annual averages in this band") %>%
  summarize(ave_range = mean(high - low),
            inOrderTo="Look at how the actual range in this band changes over time")

I know I can do this with comments, but a dedicated parameter would encourage authors to explain specific steps more precisely. The next step would be to come up with a (restricted, checkable) vocabulary like that used in Gherkin

…or something equally nebulous. I’m not a data scientist, so I don’t know exactly what would be most useful in practice, but as I help a colleague figure out why Booleans are showing up as strings, what I do know is that there has to be a better way.
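As a thought experiment, the inOrderTo idea can be sketched in JavaScript. This is purely hypothetical (every name below is invented, and it is not dplyr or TidyBlocks code); the point is just that the rationale becomes machine-readable data attached to each step rather than a comment:

```javascript
// Hypothetical sketch: a verb factory that accepts an inOrderTo option
// and stores it on the resulting pipeline step, so the "why" travels
// with the code instead of living in comments.
const makeVerb = (name, fn) => (options = {}) => {
  const { inOrderTo = null, ...args } = options
  const step = data => fn(data, args)
  step.verb = name
  step.why = inOrderTo
  return step
}

// One invented verb for illustration.
const filter = makeVerb('filter', (rows, { keep }) => rows.filter(keep))

const pipeline = [
  filter({
    keep: r => r.estimate !== 0.95,
    inOrderTo: 'Get rid of markers used to indicate unreliable data'
  })
]

// Run the steps in order, and report each step's rationale.
const run = (data, steps) => steps.reduce((d, s) => s(d), data)
const explain = steps => steps.map(s => `${s.verb}: ${s.why}`)
```

A linter or notebook viewer could then insist that every step carries a why, or render explain(pipeline) next to the results, which is one route toward the restricted, checkable vocabulary mentioned above.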

31 Jul 03:17

DISCOTRAIN twitter.com/RailAlberta/st…

by wtyppod
mkalus shared this story from wtyppod on Twitter.

#tbt when Calgary ran regional rail! Anderson to about where Somerset is at 162 ave. Ran on @CanadianPacific tracks, carried about 800 people per day and with a top speed of 120kmh it covered the trip in minutes. @CalgaryMetroReg can make this happen again in the new growth plan! pic.twitter.com/Q4Xh0qIHzW







31 Jul 03:16

Twitter Favorites: [SFUResidence] @ryanmottau @SFU @ryanmottau Super heartwarming for us to hear you created this lasting friendship @SFUResidence We… https://t.co/wlaCAtuZQw

31 Jul 03:15

Why You Need A Community Manager (and what happens if you don’t replace one)

by Richard Millington

Back in 2017, my colleague, Sarah Hawk, left her role managing the FeverBee community to take up an incredible new opportunity at Discourse.

I didn’t hire a replacement. For the past three years, the community has been left to manage itself. I’ve dipped in occasionally to respond to discussions, remove spam, make a few tweaks, but that’s about it.

This means the community has unintentionally served as a fascinating natural experiment for what happens when you don’t replace (or don’t hire) a full-time community manager.

If you’re looking to make a case about the importance of a full-time community manager, this data should help.

 

Increase In The Number of Visitors



Let’s start with the number of visitors. Things didn’t really change much at first. In fact, the number of visitors continued to increase significantly for the following year. The reason is simple: the community was still attracting growing levels of search traffic.

 

Increase In Search Traffic

The number of visitors is entirely driven by organic search traffic.


My best guess is that the community was still considered ‘fresh’ by Google and the growing number of discussions kept ranking highly in search engines.

It’s only been in the past year that traffic has really begun to decline – largely the result of formerly popular topics no longer ranking highly in search engines.

Key Takeaway: Even without a community manager you will keep attracting similar (or increased) levels of search traffic for a year or two.

 

Rapid Decline In Pageviews

This is where we can see a more immediate impact of losing the community manager.


The number of pageviews (a fairly good indicator of how engaging people find the community) dived almost immediately. We might have been attracting more people, but they were finding the community a lot less engaging.

My guess is that the failure to populate the community with fresh, interesting discussions, to ensure previous discussions received a response, and to do the day-to-day welcoming work is reflected in this instant decline.

 

Rapid Decline In Engagement Metrics

You can see the same immediate impact in broader engagement metrics too. The number of pages (discussions) members viewed per visit plummeted and the average session duration also fell immediately.


While the numbers had fallen a little before the community manager left, this is probably because Sarah Hawk was working just 2 days a week for FeverBee prior to her departure.

Interestingly, unlike total page views, engagement metrics seem to ‘bottom out’ at around 2 to 2.5 pages per session and average session duration of about 1:30 to 2:00 minutes. This was primarily new visitors browsing around after most of the regulars had departed.

This might be an interesting benchmark for community professionals (i.e. anything above this line is attributable to the work of a community manager).

Key Takeaway: The moment a community manager leaves you can probably expect an immediate decline in engagement. This will level out at ‘rock bottom’ within about a year.

 

Community Activity

As expected, when a community manager leaves you see an immediate collapse in the level of participation in the community.


Some of this is simply the absence of Hawk’s contributions; she was making 80+ posts per month. But the majority of the decline came from members participating less. When she wasn’t there to proactively drive the community, the level of participation quickly declined.

Key takeaway: If a community manager leaves, you need to find a replacement quickly or you lose all the value the community has created.

 

Daily Active Users / Monthly Active Users (stickiness)

Once Hawk had left, the community also became a lot less ‘sticky’.


While the data itself is highly variable from month to month, the trend-line broadly shows a clear decline in stickiness until (again) a baseline is reached.

Members who did visit (i.e. regulars) were suddenly visiting a lot less frequently. 20% is often considered the standard to aim for here, and the FeverBee community was within touching distance of that before Hawk left.
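For anyone wanting to compute this benchmark for their own community, stickiness is just the ratio of average daily active users to monthly active users. A minimal sketch (the numbers in the comment are invented, not FeverBee’s data):

```javascript
// DAU/MAU "stickiness": average daily active users for a month divided
// by the number of distinct users active that month. The 20% standard
// mentioned above corresponds to a ratio of 0.2.
const stickiness = (dailyActiveCounts, monthlyActiveUsers) => {
  const avgDau =
    dailyActiveCounts.reduce((sum, n) => sum + n, 0) / dailyActiveCounts.length
  return avgDau / monthlyActiveUsers
}

// e.g. averaging 180 actives a day against 1,000 distinct monthly
// actives gives 0.18, just under the 20% benchmark.
```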

 

Conclusions

Could these results be any more conclusive?

Communities can’t run themselves. Large communities might fare slightly better than smaller communities like ours, but you still need someone at the helm.

Not replacing the community manager immediately reversed the direction of the community from one which was increasing in activity and engagement to one that went into a tailspin decline.

Note: You might notice several of these metrics were in decline just prior to the community manager’s departure. I’d attribute this to a) Sarah Hawk leaving half-way through the month of July and b) her working only 2 days a week just prior to her departure.

If you want a community to thrive and reach a critical mass of activity, you need a full-time community manager to make that work. You can’t wait for a community to be successful and then hire one. And if your community manager leaves, you had better find a replacement quickly.

31 Jul 03:06

Telus offering Amazon Prime as a benefit to customers with select Optik TV plans

by Aisha Malik

Telus announced that Optik TV customers in B.C. and Alberta with four or more theme packs can choose Amazon Prime as a benefit in their package.

“Customers will enjoy the best in shopping and entertainment from Amazon, including direct access to the Prime Video app to watch thousands of popular movies, TV shows, and award-winning Amazon Originals like Fleabag and The Marvelous Mrs. Maisel,” the carrier noted in a press release.

A Prime membership regularly costs $7.99 CAD per month. Telus notes that Optik TV customers who add Amazon Prime to their TV package will also have unlimited access to ad-free listening through Amazon Prime Music, along with hundreds of books with Prime Reading.

Further, customers will also get free in-game loot and games with Twitch Prime and free unlimited photo storage with Amazon Photos.

Customers can access the app by navigating to the Optik TV app home page or by turning to channel 424.

Source: Telus

The post Telus offering Amazon Prime as a benefit to customers with select Optik TV plans appeared first on MobileSyrup.

31 Jul 03:05

Oversaturated

by Cameron Kunzelman
Full-text audio version of this essay.

After George Floyd was killed by police officer Derek Chauvin and video of the murder was shared widely, a wave of protests began in Minneapolis, where the incident occurred, as well as in most other metropolitan areas in the U.S. and many smaller cities and towns. The violent police response to these events has generated footage of officers tear-gassing, pepper-spraying, brutalizing, and beating American civilians with batons, much of which has circulated online and reinforced the protests’ urgency.

On June 8, U.S. House and Senate Democrats responded to this popular uprising. Wearing kente cloth draped around their necks and flanked by members of the Congressional Black Caucus, Democratic leaders Nancy Pelosi and Chuck Schumer walked to Emancipation Hall in the Capitol and kneeled for eight minutes and 46 seconds to honor Floyd, mimicking a common protest action. Images and videos of this event, which was staged to coordinate with the introduction of a police-reform bill, also proliferated across social media and news sites, but to a very different effect.


When protesters kneel, it evokes solidarity, making the visual point that all these people, together, are collectively doing something. They are united in action and idea. They are a bloc. But the kneeling elected officials did not evoke feelings of collectivity but instead, as Doreen St. Félix wrote, “embarrassment.” They are not part of a bloc or a mass. They are not in the same structural position as the protesters rising against injustice; they are emanations of the state apparatus that perpetuates it. Much as with the police who have sought to kneel with protesters, the politicians’ gesture is a show not of solidarity but of power — that they can appropriate protests for publicity stunts without refusing, let alone altering, their structural relation to racist oppression. A sitting member of Congress kneeling is closer to replicating Derek Chauvin than Colin Kaepernick.

Why would the Democrats choose such a strange stunt to pair with their bill proposal? Why didn’t they recognize how out of step this was with the original use and intent of the imagery that they were replicating? An explanation can be found in the concept of “saturation,” explored in a new collection of essays, Saturation: Race, Art, and the Circulation of Value, edited by C. Riley Snorton and Hentyle Yapp. The concept helps explain how institutions of power can become more accommodating to racial critique and racialized peoples without fundamentally transforming into anti-racist bodies.

Images like the politicians’ publicity stunt are schematics of institutional power; they make “saturation” visible. Saturation describes both how institutions can foreground race as a substitute for more substantive change and how individuals can have their racialization foregrounded by those institutions. It speaks to both what systems of racialization do to people and how institutions sustain those systems. Saturation points to the psychic load racialized people have to take on to make a place for themselves within saturated institutions built through legacies of discrimination.

In Snorton and Yapp’s introductory essay, they define saturation by way of Isaac Newton’s optics theory, “in terms of the intensity of a color, expressed as the degree to which it differs from white.” For Snorton and Yapp, this definition parallels “an understanding of race as peripheral to whiteness in Western thought.” This means that saturation can be understood as a measure of intensity and as a way of thinking about proximity to whiteness and the social conditions that it creates. Rather than diagnose racism in binary terms — with something simply being racist or not — it can be thought in terms of saturation levels. For example, artist Candice Lin’s “The Land of Milk and Blood,” an essay included in the collection, does not simply identify the production of colonial and imperialist goods like sugar, opium, and silver as part of a racist assemblage but instead identifies them with the history of dyes and porcelain, showing how those artists’ materials are saturated with the culture of extraction that surrounds them. In this sense, we are talking about an entire system of racialization that extends into the very materials of life, and not simply individual actions or even an implicit ideology.

The capacity for saturation exists at every instance when a racialized person interacts with an institution, and it happens on both sides of that encounter. “Racialized subjects,” Snorton and Yapp argue, are often saturated with the demands from institutions to “represent their minority position” and thereby “become oversaturated by navigating and existing within the institutions [they] invest in and then critique.” To be oversaturated as a person, in their terms, is to be both bogged down and pulled apart by the demands of institutions that want individuals to be emblematic of their racial identity. At the same time, an institution can also become more saturated in the sense that increases in nonwhite representation within them may still leave structural problems fundamentally untouched.

To be oversaturated as a person is to be bogged down and pulled apart by the demand to be emblematic of their racial identity

Thinking in terms of saturation, rather than through overfamiliar heuristics like whiteness-centering fragility or a sociological analysis, removes us slightly from the now familiar (and institutionally embraced) language of diversity, which rarely considers the mental health of so-called diverse people, whether they are actually included in significant decisions, or whether an institution can retain those perspectives without psychically draining them. It allows us to think about what happens to people when they interact with larger structures, and it gives us language for how institutions fail to shift in longitudinal ways. Put slightly comically by the editors, in the context of art museums: “Is the answer more Jean-Michel Basquiat retrospectives to remedy the history of exclusion?”

In other words, saturation offers an analysis beyond simply saying “race impacts everything” or “everything is political.” It gives us more precise language for talking about the flexibility of the systems we live within and how they can become more accommodating without changing. A person who becomes wholly saturated by demands on their person can keep going until they collapse; an institution might do the same. The problem is that it takes a whole lot more weight to collapse an institution, and that’s why they can hold onto unchanging power in a way individuals cannot.

Structurally, the American political system has no saturation limit. It can become infinitely accommodating of diversity and yet never remove the power of institutional whiteness to demand racialized peoples prove their humanity. As the kneeling-in-kente-cloth example suggests, the Democratic party can become wholly saturated with racial consciousness and gesture toward racial equity in images and actions yet fundamentally reproduce many of the same problems they’re protesting.

To take another example, from a few days before the members of Congress kneeled, Senator Rand Paul proposed an amendment that stalled the passage of a bill that would make lynching a federal hate crime. All three current Black senators spoke against the amendment, including Cory Booker, who noted that the bill had overwhelming bipartisan support and served to repair Congress’s lack of action in the past. But he concluded his speech with remarks that suggested he was saturated — overburdened with the racialized demand that he convince the overwhelmingly white Senate of not only the necessity of anti-lynching legislation but of recognizing his own humanity: “I’ve had to explain to grown men this week that there is still hope in America, that we can make change in America, that we can grow and heal in America, that we can make this a more perfect union,” he said, and then later, “I object to this amendment. I object. I object. I object. I object on substance. I object on the law. For my heart, spirit, and every fiber of my being. I object for my ancestors.”

Booker, as a Black politician, is obliged to perform hope for America’s future even as its legacy of racialization and oppression is being brought to bear on him. He has to recognize that he’s being fundamentally devalued by the institution he has invested in (which, indeed, has a long legacy of such devaluing as part of its role in maintaining racial hierarchy). His rhetorical negotiation of the tension between his power as a Senator and his power as a Black man marks his saturation point. He has to stand up and defend a racial justice bill with the entire force of his history as a Black man against the bureaucratic “neutrality” of a procedural argument. It clearly pains him to do so, and the institution eats that pain without any regard for him as a person or his well-being. And the next time it comes up for a vote, he’ll have to do it again.


While the essays in Yapp and Snorton’s edited collection mostly focus on institutions like museums and markets, the two polarities of saturation apply just as well to conventional politics (as I have used it above) and online platforms, where we might say that a kind of “media saturation” occurs. I don’t mean this in a media-effects sense, in which a person is overwhelmed by social media or cable news, aimlessly doomscrolling. Rather, media “saturation,” in Snorton and Yapp’s sense, could be useful for understanding how racialized content and perspectives work their way through social media and sometimes determine the shape and content of conversation there.

Social media was saturated by images of the black square — revealing how much race could be foregrounded while still ultimately being sidelined

Consider the example of #BlackoutTuesday: This was a protest staged primarily on Instagram as a way of drawing attention to the Black Lives Matter movement. The action was simple: instead of your normal Instagram content, you post a black square. To many of the mostly well-meaning participants, it may have seemed like a low-stakes way to promote solidarity with what they saw as an important cause. But if this New York Times roundtable can be taken as representative, the reaction among Black Americans was to see the black square as pushing out other useful information about protests or voting while allowing white people with little investment in the movement to post and move on. It also allowed brands to posture their solidarity without backing it up with action. As Jasmine Howard put it in the roundtable: “So much wrong and you all think one black square makes you Malcolm X.”

Instagram can be said to have been saturated by images of the black square — revealing how much race could be foregrounded while still ultimately being sidelined. Like the American political system, it demonstrated a capacity to absorb these gestures seemingly infinitely, with business starting up as usual again the next day. The moment of racial recognition, of representation of a political movement, was taken up by the technological platform and then treated as any other piece of content or meme by the white power-holders on the platform. It is another kind of content, another Basquiat exhibition at the museum. It comes and goes without transforming the racialized structure of the space, allowing the white-aligned institution to pat itself on the back for accomplishing a modicum of diversity.

At the same time, the particular psychic stress that the roundtable discussants work through in the New York Times piece shows how saturation works on people. Much of the critique of the black square emerges from the recognition that it, alone, means very little. As Howard notes in the discussion, the black-square posts were attended by her white friends’ increased demands for attention and approval. As the platform was saturated with black squares, certain people deemed to be “representative” were saturated with racialization, required to bear witness to others’ “good work.”

For Snorton and Yapp, the concept of saturation is a way of wrangling with power and capitalism. It “provides a rubric to ask different questions and to push us to demand more from the ways that institutions function and how race has come to be imagined and understood.” Reading the collection during the current moment, in the context of the examples above, I am struck by how it can function as much like a crowbar as a rubric. After all, the relationship between Congressional stunts and Booker’s feeling compelled to evoke his own humanity is complicated. He is both complicit in a system of power and domination and a Black man in America. Likewise, Instagram is a visual ecology that supports (and is supported by) both empty gestures and footage of protests and advocacy at the same time. “Saturation” gives us a way of talking about when change does or does not happen, and why some people particularly are expected to bear the psychic burdens of that. While it doesn’t provide tactical pathways or specific methods to get out of our current position, the concept of saturation does give us a new way of thinking about systemic pressure points. And, maybe, new ways of applying pressure.


Since I began writing this piece a month ago, we went from a media ecology where protests were everywhere to one in which they are barely shown. They’re still happening, of course. People are in the streets in masses. They’re pulling down statues. They’re making their voices heard in their local communities. And yet news channels and feeds slowed that visual information down to the smallest trickle. We have gone from mass media panic about uprisings in Minneapolis, New York, Los Angeles, and Seattle’s Capitol Hill Organized Protests to short, staccato segments presented in the context of governmental response. In the past two weeks, many of those segments have focused on the use of federal agencies to abduct protesters off the streets in Portland and, presumably, other cities.

To continue beyond “saturation” would threaten the white-aligned power core of media institutions’ message and economic position

As social media fervor accelerated around this practice, Trump doubled down and claimed it as a positive federal program for getting unruly cities back under the control of law and order. This explicitly authoritarian act was covered uncritically by established media organizations; pundits debated the moral and legal dimensions of these actions while the President’s comments and speeches were consistently aired.

It seems clear that media outlets felt some kind of pressure to downplay the nationwide protests and their calls for action. Saturation can help us frame this: These media institutions can absorb Black Lives Matter messages, calls for police abolition, and radical movements in the streets only up to a point. They hit their saturation point, the absorption limit for these ideas, and then they stopped. To continue would threaten the white-aligned power core of their message and economic position.

But these institutions were also clearly hungry for a way to both-sides the issue, and they have shown themselves willing to make brute authoritarianism a “side” to be debated, and to profit from the “engagement” that this leads to. While there might be a tipping point for media saturation, or for saturation in general, I don’t have a sense of what it might be if institutions can always re-up their structural alignment with whiteness. This choice to turn away from positive coverage of the protests and reorient toward the authoritarian interventions is worrying because it shows the extreme investment that the liberal order has in maintaining the racialized power structures currently in place.

But I want to use the concept of saturation to read hope here. If these media entities could absorb and digest protests infinitely, then they would still be monetizing them. It required the weight of the U.S. executive branch to sway them back into a moneymaking-through-debate logic, and that swing of power is more significant than it might seem at first glance. The fact that media organizations have to pretend that the only thing worth covering about the protests is their brutal suppression suggests that the ideas represented in those protests truly have revolutionary potential: that our institutions could be threatened or overturned via the media landscape. The past couple of months could, in part, be a template for thinking of oversaturation as a political strategy.

The cost of saturation for racialized people is high, and that cost is paid during encounters with institutions bound up in whiteness and power. Snorton and Yapp open a door to a political escape hatch, though, by suggesting we might be able to short-circuit those very institutions if we can remove their methods for managing how saturated they are. That obviously requires the dismantling of the parasitic relationship that news, their ratings, and their profits have with our political structures.

I think of the demonstrations at the CNN Center in Atlanta. The things that were happening in the screens, in all the screens, were happening right outside the headquarters of the mechanism that produces the content on those screens. The media apparatus could not ignore the groundswell of Black and brown power that existed right outside its door, and it had no choice but to look on and broadcast that clashing moment. While moments of saturation are happening constantly, moments like that may offer us a path toward a future where they can be deployed strategically.

31 Jul 03:01

The Steepest Street in San Francisco

by Gordon Price

From sfgate via Dianna:

San Francisco is defined by its hills.

But which is the steepest?

It’s not Lombard Street. While the famous winding block between Hyde and Leavenworth, with its tight turns and postcard views, has become the celebrity of San Francisco streets, its incline even before the eight switchbacks were built in 1923 was a relatively paltry 27%. It may be the crookedest and most famous block in the city, but it’s certainly not the steepest. …

YouTuber and San Francisco native Joey Yee wanted to find out, and climbed the city’s actual steepest streets in a video posted on YouTube:

So, what is the grade of Vancouver’s steepest street?

31 Jul 02:55

Learning to See-Improving Knowledge Work Capabilities

by Jim

My wife is a photographer. Quite a good one, in fact. One sure way to annoy her is to ask what kind of camera she uses after admiring one of her photos. It’s her eye, not the camera, that recognizes the perfect shot. The tool may well be the least important element in the mix.

My own photography has gotten better courtesy of time spent in apprentice mode by her side. Photography is also an example of a knowledge work capability that can shed light on performance improvement in a knowledge context. The primary performance metric is whether you can capture the image you envision. Secondary metrics might include meeting time, budget, and other constraints on the image. In some settings, you may also need to be able to articulate the logic for why the image you eventually capture meets the criteria set.

If your goal, for example, is to capture a simple selfie to demonstrate that you were there at Mt. Rushmore, anything with both you and the mountain in frame and in focus will suffice. As your goals evolve, you also acquire new concepts and vocabulary: composition, depth of field, light conditions, focal length, exposure.

Meeting those goals may lead you to exploring and adopting new tools. A better camera might well enable you to capture images that weren’t possible with starter tools. But the functions and features of more sophisticated tools might just as well not exist if you don’t have the corresponding concepts to work with.

These concepts and the tools all need to be in service to creating the images you imagine. You don’t learn them in theory or in isolation. You learn them by doing the work and getting feedback. Over time, you also learn to give yourself better feedback.

Ira Glass has an excellent series of short videos on storytelling that fit here and fit knowledge work in general. The whole series is worth your time and attention–Ira Glass on Storytelling – This American Life. The nut graf, however, is something to keep close at hand as you work at your craft:

Nobody tells this to people who are beginners, I wish someone told me. All of us who do creative work, we get into it because we have good taste. But there is this gap. For the first couple years you make stuff, it’s just not that good. It’s trying to be good, it has potential, but it’s not. But your taste, the thing that got you into the game, is still killer. And your taste is why your work disappoints you. A lot of people never get past this phase, they quit. Most people I know who do interesting, creative work went through years of this. We know our work doesn’t have this special thing that we want it to have. We all go through this. And if you are just starting out or you are still in this phase, you gotta know its normal and the most important thing you can do is do a lot of work. Put yourself on a deadline so that every week you will finish one story. It is only by going through a volume of work that you will close that gap, and your work will be as good as your ambitions. And I took longer to figure out how to do this than anyone I’ve ever met. It’s gonna take awhile. It’s normal to take awhile. You’ve just gotta fight your way through.

There is craft behind all knowledge work. You get better at craft work by being intentional about getting better. And by accepting that craft lies in the mix of tools, techniques, practices, mentors, and peers. It’s a mistake to remain wedded to the first tool you pick up. It’s equally a mistake to confuse changing tools with improving your craft.

The post Learning to See-Improving Knowledge Work Capabilities appeared first on McGee's Musings.

31 Jul 02:55

to good effect

Towards the end of the so that it may be remembered blog post I wrote:

Android devices even have public APIs for developers to use host card emulation (HCE) which allows the device to be programmed to act as though it were an NFC tag. This is widely understood to be how Apple Pay works, for example, but it is functionality only available to applications that Apple controls. HCE in a museum context offers a number of interesting opportunities. I probably wouldn't recommend installing a fleet of Android phones in the walls and gallery cases of a museum but, for a unit cost of $100, there are a number of places where a tiny computer with a screen, an internet connection, sophisticated graphics and rendering capabilities and the ability to programmatically broadcast itself (HCE) would be ripe with possibility.

In both the bring your own pen device and so that it may be remembered blog posts I also wrote:

The phrase "NFC tag" should be understood to mean "any equivalent technology". Everything I've described could be implemented using QR codes and camera-based applications. QR codes introduce their own aesthetic and design challenges given the limited space on museum wall labels but when weighed against the cost and complexity of deploying embedded NFC tags they might be a better choice for some museums.

Take SFO Museum, for example, where most of the wall labels are mounted on slanted plates behind glass. Reading NFC tags can be tricky even when you can touch them. NFC tags that are six to ten centimeters away behind glass and positioned at an angle from that glass are basically invisible to an NFC reader.

After those three blog posts I wrote about working with e-ink displays and Raspberry Pi computers. In that blog post, titled just how easy, I said:

The Raspberry Pi Zero ships without any network interfaces (ethernet or WiFi) and requires that you solder on the connectors for the (e-ink) display HAT to plug in to but for $14 you can buy a Raspberry Pi Zero WH which comes with a pre-soldered connector and both WiFi and Bluetooth chips.

There is also the Raspberry Pi Zero W which is the same as the WH but without a pre-soldered connector. It retails for $10. What I didn't say in that blog post but what I've been thinking about for a little while now is: What if the ID of each object was broadcast as a Bluetooth Low Energy (BLE) advertisement as it was being displayed?

This isn't a new idea. It's what Bluetooth beacons are designed to do. By and large, I have not been a huge fan of either Bluetooth or beacons in a museum setting. Bluetooth is a complicated technology that is made more difficult by the complexities of working with radio frequencies in uncontrolled environments. Bluetooth opens the door to a whole universe of security concerns. Bluetooth beacons have, so far, been notoriously expensive and challenging to program, update and maintain.

photograph: Pan American Airways, radio operators. Photograph. Gift of Jon E. Krupnick, SFO Museum Collection. 2008.056.1115.

It is possible to use beacons to good effect but never without a dedicated, costly and concerted long-term effort. Matt Tarr's Location, location, location! The proliferation of indoor positioning and what it means and doesn’t mean for museums paper about the American Museum of Natural History's (AMNH) Explorer application and Shelley Bernstein's The Realities of Installing iBeacon to Scale blog post about the experience at Brooklyn Museum are good discussions of the benefits and the challenges of deploying Bluetooth beacons in a museum.

The cost to develop AMNH's Explorer application, inclusive of the 800 beacons installed in the museum and the native mobile applications that were built to use them, is said to have been about $2.75 million. As a practical matter this makes it an approach far outside the reach of most museums or cultural heritage organizations.

In the just how easy blog post, writing about the use of Raspberry Pi computers, I said:

It is the combination of pluggable (low-cost) hardware and high-level expressive programming languages for operating that hardware and being able to do so in a traditional Unix operating system that makes all of this exciting.

The Raspberry Pi Zero W is a $10 computer with a 1GHz CPU and 512MB of RAM that can connect to the internet over WiFi, communicate over Bluetooth, and broadcast over Bluetooth Low Energy (BLE), all while running a modern (and updated) Unix operating system with support for most programming languages in use today. The principal drawback, compared to commercial BLE beacons, is that it requires a power source, which means that what might have begun as a technology problem is also very much an operations and a physical architecture problem.

Being a general purpose computer the Zero doesn't come with any default tooling for configuring itself as a BLE beacon so developing that software becomes an additional burden. Greg Turner's An Internet-of-Things strategy for ACMI blog post about how they are maintaining a fleet of 800–1000 Raspberry Pi computers that have been installed in the galleries at the Australian Centre for the Moving Image is a good example of how the configuration challenge can be managed.
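Whatever manages the fleet, the advertising payload itself is small and simple to construct. Here is a minimal sketch, in Python, of packing an object identifier into a legacy BLE advertising payload. The use of a manufacturer-specific-data structure and the 0xFFFF "reserved for testing" company identifier are assumptions for illustration, not how any particular beacon product does it; getting the Bluetooth controller to actually transmit the bytes is the separate, harder problem discussed below.

```python
# Sketch: packing a museum object identifier into a legacy BLE advertising
# payload (31 bytes max). The manufacturer-specific-data framing and the
# 0xFFFF "reserved for testing" company ID are illustrative assumptions.

def advertising_payload(object_id: str) -> bytes:
    data = object_id.encode("ascii")

    # AD structure 1: Flags (LE General Discoverable, BR/EDR not supported)
    flags = bytes([0x02, 0x01, 0x06])

    # AD structure 2: Manufacturer Specific Data (AD type 0xFF), preceded by
    # its length byte and the 0xFFFF company identifier (little-endian)
    mfg = bytes([len(data) + 3, 0xFF, 0xFF, 0xFF]) + data

    payload = flags + mfg
    if len(payload) > 31:
        raise ValueError("identifier too long for a legacy advertisement")
    return payload

payload = advertising_payload("sfom://id/12345")
```

On a Linux-based Raspberry Pi these bytes would then be handed to the BlueZ stack, for example via the HCI LE Set Advertising Data command, to be broadcast.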

There are tradeoffs with any technology choice and they are influenced by the circumstances in each institution. As much as I might like to install NFC tags in all the gallery spaces at SFO Museum the choices that were made to meet the demands of exhibiting objects in a busy airport make NFC tags impractical. Could AMNH install 800 Raspberry Pi Zero W computers in discreet locations throughout their galleries in a way that visitors wouldn't see them? Perhaps, but it's easy to imagine scenarios where that would be difficult or impossible.

One advantage that standalone and battery powered BLE beacons have is that they can be easily moved to another location. In order to do the same with a Raspberry Pi an institution would also need to design and build a modular approach for running dozens of power sources to different locations, both on the floor and in the walls, in any given gallery. That's not something most museums can do, easily, right now. This difficulty suggests it is something that any museum looking to do renovations should make a priority.

I haven't been successful in programming a Raspberry Pi Zero to act as a BLE beacon yet. I am confident it's possible and the documentation appears to be meticulous and detailed; the official Bluetooth 4.0 specification is 3,256 pages long, not including subsequent addenda. Perversely, that specificity, absent any simple "I just want to..." cookbook-style examples, serves as a barrier for what is still a mornings-and-weekends project.

So far I have been dependent on other people's software implementations of the Bluetooth specification and I have only been able to make one of three packages, Paypal's gatt library written in Go, actually broadcast anything. Unfortunately while gatt is able to broadcast messages that can be seen by an iOS device that device doesn't think the messages contain any data. The gatt codebase hasn't been updated in five years so it is unclear whether the problem is that it doesn't conform to the current Bluetooth specification or something else. Maybe when I've had a chance to read the 400 pages outlining the Low Energy Controller in the documentation I will better understand what's going on.

In the interest of testing my original idea — What if the ID of each object was broadcast as a Bluetooth Low Energy (BLE) advertisement as it was being displayed? — I opted instead to teach the wunderkammer application, first described in the bring your own pen device blog post, not only to scan for BLE advertisements but to broadcast them as well.

Here are some screenshots and a video to demonstrate what that looks like:

There are now two broadcast buttons in the wunderkammer navigation bar. The original antenna.radiowaves.left.and.right icon used to scan NFC tags is now used to start and stop BLE advertising (broadcasting) in the application. The tag-scanning activity is now represented by the badge.plus.radiowaves.right icon which has been flipped horizontally to better signal the idea that scanning happens outside the device. In this screenshot the broadcasting button is disabled because the device's Bluetooth functionality has been turned off.

This is what things look like when Bluetooth is enabled. If a device hasn't enabled Bluetooth or doesn't support scanning NFC tags both buttons would be disabled.

When the broadcasting button is pressed it turns red to signal that it is advertising the object ID that the wunderkammer application is displaying.

The scan button has been updated to present a modal dialog when the device is able to scan for object tags using different technologies.

Here's a video of two iOS devices, each with a copy of the wunderkammer application, where the device on the left is broadcasting BLE advertisements of the object it is showing and the device on the right is scanning for those advertisements and using the data contained in them to show the same object.

As with NFC tags the device on the left is broadcasting an object ID like sfom://id/12345 and the device on the right is resolving that identifier into a fully qualified URL that it can use to retrieve an object image and metadata from the internet or a local database.
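The resolution step is, at its simplest, a lookup keyed on the identifier's scheme. A minimal sketch follows; only the sfom://id/12345 identifier comes from this post, and the collection.sfomuseum.org URL template is a hypothetical stand-in for whatever a real resolver would use (which might also be a local database rather than a remote endpoint).

```python
# Sketch: resolving a compact object identifier (sfom://id/12345) into a
# fully qualified URL. The URL template below is a hypothetical assumption.
from urllib.parse import urlparse

RESOLVERS = {
    "sfom": "https://collection.sfomuseum.org/objects/{id}/",
}

def resolve(identifier: str) -> str:
    # urlparse handles custom schemes: sfom://id/12345 yields
    # scheme "sfom", netloc "id" and path "/12345"
    u = urlparse(identifier)
    if u.scheme not in RESOLVERS or u.netloc != "id":
        raise ValueError(f"not a recognized object identifier: {identifier}")
    return RESOLVERS[u.scheme].format(id=u.path.lstrip("/"))

url = resolve("sfom://id/12345")
```

Keeping the table of resolvers separate from the parsing logic means the same application could consume identifiers broadcast by other institutions simply by registering additional schemes.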

Unlike NFC tags, the device on the left updates the data it is broadcasting (the object identifier) when the object it is displaying changes, and the device on the right continues to listen for those updates. This feels like an interaction model significantly different from, and more novel than, what's currently available to visitors, one that could be used to good effect in a museum setting.

In both cases, encoding data in NFC tags or broadcasting it as BLE advertisements, there still needs to be an application that consumes the data and does something with it. The development and maintenance costs of bespoke applications to work with these signals, as well as their awareness and adoption, continue to be challenges that the cultural heritage sector struggles with.

Shelley Bernstein's Building is easy, but launching is hard and Sara Devine's Messaging is Harder blog posts about raising awareness, adoption and use of the Ask Brooklyn mobile app at Brooklyn Museum illustrate the problems well. It's generally worth reading anything on the Brooklyn Museum tech blog but especially any post written about the Ask Brooklyn initiative.

There is also Seb Chan and Luke Dearnley's 2012 Museums and the Web (MW) paper Using QR codes, mobile apps and wifi tracking data to understand visitor behaviour in exhibitions about the difficulties, and ultimately failure, of getting visitors to download an application once they've set foot in the museum. So far I have only been able to find a placeholder for the paper on the MW website but not the actual content of the paper itself. Maybe when Seb or Luke have a chance to read this they can point me to the paper or their slides.

I don't have a simple answer to these problems but earlier this year in the Geotagging Photos at SFO Museum, Part 1 – Setting the Stage blog post I wrote:

The larger motivation for doing things this way is to ... address an evergreen subject in the cultural heritage sector: The lack of common software tools, and the challenges of integrating and maintaining those infrastructures, across multiple institutions.

My own feeling is that many past efforts have failed because they tried to do too much. It’s a bit of a simplification but a helpful way to think about these tools is to imagine they consist of three steps:

  • Data comes from somewhere. For example, an image comes out of a database or an asset management system.
  • Something happens to that data. For example, an image is geotagged.
  • The data then goes somewhere else. For example, the geotagging data goes into a database.
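A hedged sketch of how small such a tool can stay: the hypothetical filter below implements only the middle step, reading records as JSON lines on stdin and writing them back to stdout, so the first and last steps stay with whatever infrastructure an institution already has. The geotag example, the field names and the coordinates are illustrative only, not an actual SFO Museum tool.

```python
# Hypothetical "geotag" filter: a small, focused tool that implements only
# the middle step ("something happens to that data") and leaves the first
# and last steps to stdin and stdout.
import json
import sys


def geotag(record: dict, lat: float, lon: float) -> dict:
    """Attach a latitude/longitude pair to a single object record."""
    record["latitude"] = lat
    record["longitude"] = lon
    return record


def main() -> None:
    # Step 1: data comes from somewhere (whoever pipes JSON lines to us).
    for line in sys.stdin:
        record = json.loads(line)
        # Step 2: something happens to that data (hard-coded coordinates,
        # purely for illustration).
        record = geotag(record, 37.6213, -122.3790)
        # Step 3: the data goes somewhere else (whoever reads stdout).
        print(json.dumps(record))


if __name__ == "__main__":
    main()
```

Run as `cat objects.jsonl | python geotag.py > geotagged.jsonl`; swapping the databases on either end is deliberately left to the caller, which is the point.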

The problem with many tools, developed by and for the sector, has been that they spend a lot of time and effort to abstract the first and the last points and ultimately fail. The nuances, details and limitations, not to mention the vast inequalities, in all the different technical infrastructures within the cultural heritage sector are legion. Developing an abstraction layer for the retrieval and publishing of cultural heritage materials that attempts to integrate and interface directly with an institution’s technical scaffolding is going to be a challenge at best and a fool’s errand at worst.

The cultural heritage sector needs as many ... small, focused, tools as it can produce. It needs them in the long-term to finally reach the goal of a common infrastructure that can be employed sector-wide. It needs them in the short-term to develop the skill and the practice required to make those tools successful. We need to learn how to scope the purpose of and our expectations of any single tool so that we can be generous of, and learn from, the inevitable missteps and false starts that will occur along the way.

I mention that for two reasons.

The first is to argue that it is a mistake to confuse the value of making the objects in our collections addressable, by publishing stable and permanent identifiers (be they on wall labels, NFC tags or BLE advertisements), with the difficulties of building, maintaining and promoting a museum's own application of that data. To me the purpose of broadcasting an object's identifier, of announcing its presence on and participation with the internet, is to foster as many applications of that data as possible.

That does not, and should not, preclude an institution from having its own application and editorial viewpoint around these objects but nor should the success or failure of those endeavours be the measure by which the value of making our collections accessible in the first place is gauged. The effort to do these things is, I believe, worth it on its merits alone. The day-to-day work becomes ensuring that these efforts are done in a way, both technically and financially, that they outlast people's initial reluctance to embrace them.

The second reason is to solicit help in building, documenting and distributing the simplest and dumbest tool to make broadcasting an object identifier as a BLE advertisement on a Raspberry Pi computer possible. Version 1 could be as simple as a command-line tool that only accepts a single parameter, the object identifier to broadcast. For example:

$> ble-tool sfom://id/6789

It could be as complicated as:

$> ble-tool -service-id {CUSTOM_BLE_SERVICE} -characteristic-id {CUSTOM_BLE_CHARACTERISTIC} sfom://id/6789

The Bluetooth specification is over 3,000 pages long and outlines a symphony of functionality and possible uses, none of which are necessary for this tool. Version 2 of my imagined tool should allow the object identifier to be updated dynamically, to stream notifications to a consuming device like I demonstrated in the video, but after that I don't think it needs to do anything else. By design.
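As a sketch of one thing the imagined ble-tool would have to do, the function below packs an object identifier into a legacy BLE advertising payload. The AD-structure layout (a length byte, a type byte, then the data) and the 31-byte limit come from the Bluetooth Core Specification; the sfom:// identifier is the hypothetical example from above, and a real tool would still need to hand this payload to the host's Bluetooth stack (BlueZ, on a Raspberry Pi), which is exactly the fiddly part the tool should wrap.

```python
# Sketch of packing an object identifier into a legacy BLE advertisement.
# The AD structures (length, type, data) follow the Bluetooth Core
# Specification; the identifier scheme is illustrative.

ADV_MAX_BYTES = 31            # legacy advertising payload limit
AD_TYPE_FLAGS = 0x01          # "Flags" AD type
AD_TYPE_COMPLETE_NAME = 0x09  # "Complete Local Name" AD type


def build_adv_payload(object_id: str) -> bytes:
    """Pack an object identifier into a legacy BLE advertising payload."""
    # Flags structure: LE General Discoverable Mode, BR/EDR not supported.
    flags = bytes([2, AD_TYPE_FLAGS, 0x06])
    data = object_id.encode("utf-8")
    # Name structure: length covers the type byte plus the data.
    name = bytes([len(data) + 1, AD_TYPE_COMPLETE_NAME]) + data
    payload = flags + name
    if len(payload) > ADV_MAX_BYTES:
        raise ValueError("identifier too long for a legacy advertisement")
    return payload


print(build_adv_payload("sfom://id/6789").hex())
```

Everything beyond this, registering the advertisement with the Bluetooth stack and rotating it when the displayed object changes, is the remaining work, but the payload itself really is this small.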

This is what I mean when I talk about small, focused tools. The details of why or where object identifiers might be broadcast as BLE notifications will vary and those details should be left to individual institutions. The commonality in those applications is the act of broadcasting and those are the tools and tasks that we should make as simple and inexpensive as possible.

photograph: San Francisco International Airport (SFO), United Airlines. Photograph. Transfer, SFO Museum Collection. 2011.068.120.

It should not be a requirement to have to use a vendor-specific device in order to do these things. I built this functionality into the wunderkammer application because Apple made it easier to get things working than doing the same on a Raspberry Pi. In the end it wasn't even easy but it was still easier than the alternative. As a sector we should endeavour to fix that disparity. There are good reasons why deploying iOS devices into a production setting makes sense but we will all benefit from being able to do the same with a $10 Raspberry Pi computer.

I will keep working on this problem because I want to integrate it with the wunderkammer-server tool I mentioned at the end of the just how easy blog post but I would welcome some help. In the meantime I have merged all the Bluetooth-specific code into the main branch of the ios-wunderkammer application if anyone is curious to see how it can be done on iOS devices.

31 Jul 02:54

The Best Drip Coffee Maker

by Alex Arpaia and Liz Clayton

Although a cheap drip coffee maker can fulfill your basic need for a big, hot pot of coffee first thing in the morning, only a great machine will ensure that your pot consistently tastes delicious.

To start the day off on the right foot—or with the right cup—we recommend the OXO Brew 9-Cup Coffee Maker. It’s fast, convenient, and easy to use, with a well-insulated carafe. Most important, it makes good coffee.

31 Jul 02:54

Apple reports $59.7 billion revenue in Q3 2020, strong iPhone revenue

by Aisha Malik

Apple has posted Q3 2020 revenue of $59.7 billion (roughly $80.1 billion CAD), which is an 11 percent year-over-year increase, beating analyst expectations.

The tech giant continued to see growth in its services revenue, which reached $13.1 billion USD (roughly $17.7 billion CAD), compared to the $11.4 billion USD (roughly $15.3 billion CAD) reported in the same quarter a year ago.

The company’s services category includes the App Store, iCloud and Apple Music, as well as more recently launched platforms like Apple Arcade and Apple TV+.

iPhone revenue increased to $26.4 billion USD (about $35 billion CAD) compared to $25.9 billion USD (roughly $34 billion CAD) a year ago.

Mac revenue increased to $7 billion USD (about $9.3 billion CAD) from the $5.8 billion USD (about $7.7 billion CAD) reported a year ago. Apple saw increases in its iPad and wearables revenue as well.

The tech giant’s most recently released iPhone is the entry-level iPhone SE (2020). Apple also released a new MacBook Air (2020) and the iPad Pro (2020).

“Apple’s record June quarter was driven by double-digit growth in both Products and Services and growth in each of our geographic segments,” said Tim Cook, Apple’s CEO, in the earnings release.

“In uncertain times, this performance is a testament to the important role our products play in our customers’ lives and to Apple’s relentless innovation. This is a challenging moment for our communities, and, from Apple’s new $100 million Racial Equity and Justice Initiative to a new commitment to be carbon neutral by 2030, we’re living the principle that what we make and do should create opportunity and leave the world better than we found it.”

Recent rumours have indicated that the tech giant’s upcoming 2020 iPhone may be delayed due to the ongoing COVID-19 pandemic.

Source: Apple

The post Apple reports $59.7 billion revenue in Q3 2020, strong iPhone revenue appeared first on MobileSyrup.

31 Jul 02:53

The Vancouver Bikennale – Six Sundays This Summer

by Gordon Price

Every two years, Vancouver has been blessed with the sculpture Biennale – a celebration of art in public space.  And Price Tags has been documenting the Biennale since 2006, when we were still producing a magazine-style documentation of urbanism in the city.

Credit goes primarily to entrepreneur and philanthropist Barrie Mowatt, who has a long and accomplished history supporting the visual arts in this city, beginning with the establishment of the Buschlen Mowatt fine art gallery in 1979, and then the Biennale in 1998.  The latter would just be a good idea or a one-off without Mowatt’s ability to deal with the astounding logistics required to organize an international exhibition of this quality – especially one that takes place in some of our most prized public spaces, the waterfront parks of Vancouver, cautiously protected by layers of discretionary approvals.

But Mowatt has been aiming to do something more than just plop down big chunks of art on goose-strewn grass (or more politely, “transforming the urban landscape into an Open Air Museum.”)  He has expanded the scope of the exhibition to transform some of our leftover urban spaces into true gathering places for community – most notably “A-Mazing Laughter” at English Bay.  The art truly does change how people see and use our public spaces.

He has also found a way to unite scattered pieces into something cohesive (that ‘outdoor museum’) by sponsoring the ‘Bikennale’ – so that numerous pieces can be viewed, appreciated and comprehended in a day.  With the pandemic making a single crowded event impossible, he has adapted the Bikennale (and Walkennale) into a month-long sequence of experiences – “SIX SUNDAYS THIS SUMMER” – that take cyclists not only along a route that connects the art but also brings in past pieces, the history of particular neighbourhoods and anecdotes about us as a people.

If you like to cycle or walk, sign up for the 2020 BIKEnnale/WALKennale Six Sundays (July 26 through August 30) and check out www.vbbike.ca to learn more – it’s a great chance to get outdoors (with appropriate physical distancing) and explore the history, architecture, and culture of a neighbourhood or two.


31 Jul 02:53

Apple Q3 2020 Results - $59.7 Billion Revenue

by John Voorhees

Apple has just published its financial results for Q3 2020. The company posted revenue of $59.7 billion. Apple CEO Tim Cook said:

“Apple’s record June quarter was driven by double-digit growth in both Products and Services and growth in each of our geographic segments,” said Tim Cook, Apple’s CEO. “In uncertain times, this performance is a testament to the important role our products play in our customers’ lives and to Apple’s relentless innovation. This is a challenging moment for our communities, and, from Apple’s new $100 million Racial Equity and Justice Initiative to a new commitment to be carbon neutral by 2030, we’re living the principle that what we make and do should create opportunity and leave the world better than we found it.”

Estimates and Expectations for Q3 2020 and the Year-Ago Quarter (Q3 2019)

Due to the uncertainty caused by the global pandemic, Apple did not provide financial guidance for Q3 2020 during its last earnings call.

Going into today’s call, Yahoo Finance said that:

Much of the earnings focus, however, will be on whether Cook and company will provide guidance for fiscal Q4, which could offer a hint about any potential delays to the launch of the firm’s upcoming iPhone 12.

Despite the lack of guidance, Yahoo Finance says that Bloomberg’s research shows that analysts expect the following results:

Revenue: $52.2 billion expected versus $53.8 billion in Q3 2019.

Earnings per share: $2.06 expected versus $2.18 in Q3 2019.

Graphical Visualization

After the break, we’ve compiled a graphical visualization of Apple’s Q3 2020 financial results.

31 Jul 02:52

Apple says it expects this year’s new iPhones to launch ‘a few weeks later’ than usual

by Bradly Shankar

Apple has confirmed that it expects this year’s new iPhone models to arrive slightly later than usual.

“Last year we started selling new iPhones in late September; this year we expect supply to be available a few weeks later,” said Apple CFO Luca Maestri in a July 30th call following the release of the company’s Q3 2020 earnings report.

Maestri did not provide a specific reason for the delay. However, this confirms a Wall Street Journal report from April stating that production on the new iPhones would be pushed by about a month due to COVID-19 related supply chain disruptions. Qualcomm also alluded to an iPhone delay this week when it announced that a major 2020 5G flagship phone had been pushed back.

Further, financial services and investment firm Wedbush recently predicted that the new iPhones will indeed release in October. That said, Wedbush still expects Apple to hold a keynote in September to formally unveil the upcoming phones.

Overall, Apple is rumoured to be working on several 2020 iPhones, including 5.4-inch and 6.1-inch iPhone 12 models, as well as a Pro version with 6.1-inch and 6.7-inch options. In terms of features, the phones are reported to sport a squared-off design similar to recent iPad Pro models, 5G connectivity and a new, faster 5nm A-series processor.

Via: The Verge

The post Apple says it expects this year’s new iPhones to launch ‘a few weeks later’ than usual appeared first on MobileSyrup.

31 Jul 00:53

Mozilla’s revamped Firefox begins rolling out to stable users on Android

by Jonathan Lamont

For over a year now, Mozilla has been working on a complete redesign of its Firefox browser on Android. Dubbed ‘Fenix,’ the rework brings both significant visual changes and an under-the-hood overhaul that, at least in my testing, makes Firefox feel snappier than ever on Android.

However, after lengthy testing, Mozilla has brought its new Firefox to the stable channel with a version jump from 68 to 79. Unfortunately, this update feels too soon in many ways.

Let’s start with the good: the new design. Firefox 79 brings several things that people on the Beta and Nightly channels — and before that, the Preview channel — have enjoyed for months. That includes a bottom URL bar, which is much easier to reach on large displays. There’s a new icon that shows up in the URL bar for reader mode now as well. Firefox 79 ushers in Mozilla’s Enhanced Tracking Protection and sets it to ‘Strict’ by default. And that’s just the tip of the iceberg.

The bad, however, has proved a sticking point for several users. First and foremost is extension support. Because of the under-the-hood changes, Mozilla prioritized bringing its recommended add-ons to the new mobile Firefox first. It appears the company is off to a slow start, with only the following few extensions available:

  • uBlock Origin
  • Dark Reader
  • HTTPS Everywhere
  • Privacy Badger
  • NoScript Security Suite
  • Decentraleyes
  • Search by Image
  • YouTube High Definition
  • Privacy Possum

A quick look at the Firefox Reddit reveals several upset users criticizing Mozilla for pushing out the update without full extension support. For once, I find myself inclined to agree, even though the few extensions I use are (mostly) available on Fenix. For such an integral part of the Firefox mobile experience, I think Mozilla should have waited to push this until it had far more extensions available.

There are several other changes users have also complained about. For example, the new tab page received some flak (I’m a fan of it, but it does have some issues). The new tab switcher also has some problems (I preferred the old Fenix switcher, but either is better than what’s available on Firefox 68). Those issues are far more subjective, however.

Mozilla has yet to roll out the new Firefox to users in Canada, so if you are a heavy extension user, you may want to hold off for now.

You can check out the official changelog for Firefox 79 here.

Source: 9to5Google

The post Mozilla’s revamped Firefox begins rolling out to stable users on Android appeared first on MobileSyrup.

31 Jul 00:53

Sandboxing and Workload Isolation


Fly.io runs other people's code in containers, so workload isolation is a Big Deal for them. This blog post goes deep into the history of isolation and the various different approaches you can take, and fills me with confidence that the team at Fly.io know their stuff. I got to the bottom and found it had been written by Thomas Ptacek, which didn't surprise me in the slightest.

Via Hacker News

30 Jul 06:35

'Luddite Sensibilities' and the Future of Education

This is the transcript of my keynote at the Digital Pedagogy Lab this morning. Except not really. It was a "flipped" keynote, so this is more like the pre-reading for what I actually talked about. Sort of.

I have really struggled to prepare a keynote for you all. This isn't the first talk I've done since my son died. Hell, it's not even the second or third. But this one has been the hardest to write, in part because I am so close with so many of you in the DigPed community, and what I want right now is to be with you, in person, to cry with you and laugh with you and rage with you and scheme with you.

I know that we're all struggling to muddle through this crisis — or these crises, I should say: the pandemic, economic precarity, school re-opening, police violence, creeping authoritarianism. So much loss. So much death. And I know that it's probably for the best that we use digital technologies in lieu of gathering face-to-face — for school or for work or for professional development or for socializing. Plenty of folks insist that these digital tools can be used well for teaching and learning, that online education doesn't have to be inferior to offline, that offline can be pretty wretched already. (Indeed, that's likely why you all are here: to work on bettering your digital pedagogical practices with an eye to equity and justice.)

But I remain steadfast in my criticism of education technologies in almost all their forms and functions. Indeed, the problems that we've long identified with ed-tech — privacy violations, security concerns, racist algorithms, accessibility and access issues, all-male leadership teams, outsourcing, disruptive bullshittery, and so on — are still here. And I fear we are at a particularly dangerous crossroads for education because of ed-tech. The danger is not simply because of the entrepreneurial and the venture capitalist sharks circling our institutions, but also because the narratives, long foisted upon us, about the necessity of ed-tech are becoming more and more entrenched, more and more pervasive. These narratives have always tended to repress the trauma and anxiety associated with the adoption of new technologies and more broadly with the conditions, the precarity, of everyday life. These narratives want us to forget that ed-tech is, first and foremost, beholden to the ideologies of machines, efficiencies, and capitalism.

But this is ed-tech's big moment, or so we're told. And all those folks who predicted a decade or so ago that schools would all be online by 2020, that universities would all be bankrupt may just be right.

So, that's the other struggle I've had trying to prepare this keynote. What — right now — do I have to offer as a speaker? I mean, who really needs to hear me be Cassandra when we can all see the impending doom? Who wants to hear me criticize the LMS or the video-conferencing software when we're stuck with it?

But that's the message of this talk, I suppose: we're not.

We're not stuck. We don't have to surrender. We can refuse, and we should. And we should support students when they refuse. These can be little refusals — small acts of resistance, obfuscations, that come from positions of little to no power. These can be loud refusals in support of those with little to no power. We can push back, and we can demand better. It didn't have to be like this (imagine me gesturing widely), and it doesn't have to be like this (imagine me pointing at the screen).

In 1984, novelist Thomas Pynchon wrote an article asking "Is It O.K. to Be a Luddite?" Regardless of what Pynchon thinks, I'm here to tell you that yes, yes it is. Luddites are imagined as the "counterrevolutionaries" of the Industrial Revolution and as such the enemies of science and technology. The name "Luddite" is used as a pejorative to dismiss anyone who frowns at technology, anyone who's perceived to be clinging to tradition over "progress." But the Luddites have been unfairly maligned, I'd say, as this group of late 18th- / early 19th-century English textile workers — skilled, technical workers — were not opposed to machines despite their famed machine-smashing. What they opposed were the exploitative practices of the textile factory owners — that is, the emerging practices of capitalism. The Luddites' tactic of what historian Eric Hobsbawm called "collective bargaining by riot" was used by workers in other industries as well.

I'm rather fond of Pynchon's essay because it weaves together several strands about Luddism — that Ned Ludd first smashed a stocking frame in Leicestershire in 1779; that Lord Byron was one of the only members of Parliament who opposed legislation in 1812 that would make the smashing of loom machines punishable by death; that Byron wrote "the Song of the Luddites" in 1816 — a "hymn so inflammatory that it wasn't published till after the poet's death" (part of which, incidentally, I have tattooed on my right forearm — "Down with all kings but King Ludd"); that Byron had spent the previous summer with his friends Percy Shelley and Mary Shelley at a chateau on Lake Geneva where the latter wrote arguably the greatest science fiction novel, Frankenstein. Pynchon calls Frankenstein a Luddite novel, and I think it's fair to call it an ed-tech novel too, because it isn't simply about questions of science and ethics, but about education — or rather the mis-education of both Victor and the creature.

Pynchon does not mention one of the other interesting bits of Byron trivia — and trust me, there are many: his daughter was Ada Lovelace, the mathematician who helped Charles Babbage write algorithms for his proposed Analytical Engine, a general purpose computational machine. Ada Lovelace is often called the first computer programmer. Nor does Pynchon mention that we can trace the history of computing back not only to Lovelace but to the Jacquard machine, an automated loom device invented in 1804 that was run via punchcards — each card corresponding to the pattern to be woven. These would have been nice details for Pynchon to include, I'd argue, as at the end of his essay he speculates that the "Luddite sensibility" would struggle in the dawning Computer Age, that few Luddites would be willing to smash these new machines as they'd believe computers would help us to "cure cancer, save ourselves from nuclear extinction, grow food for everybody, detoxify the results of industrial greed gone berserk." Rather than a machine of exploitation, Pynchon argued, the computer would become a tool for revolution, one that, with its immense data-analyzing capabilities, would finally overturn the "permanent power establishment of admirals, generals and corporate CEO's."

Ha.

Unlike Pynchon, I do believe the "Luddite sensibility" survives, although not necessarily, as the insult suggests, in those who drag their feet in adopting the latest gadget. And this seems to be a perfect time to cultivate it further. Recall, the Luddites emerged in the economic devastation of the Napoleonic Wars — they wanted jobs, yes, but they wanted freedom and dignity. As we face economic devastation today, we need some solidarity and perhaps even a little sabotage. We can look at ed-tech as something to smash knowing that what we aim for are the systems of violence, exploitation, neoliberalism, mechanization, and standardization that ed-tech demands.

This requires more than a Luddite sensibility. It requires a Luddite strategy. And for us, I'd say, it is time for a Luddite pedagogy.

A Luddite pedagogy is not about making everyone put away their laptops during class — remember those days? Again, Luddism is not about the machines per se; it's about machines in the hands of capitalists and tyrants — in the case of ed-tech, that's both the corporations and the State, especially ICE and the police. Machines in the hands of a data-driven school administration. Luddism is about a furious demand for justice, about the rights of workers to good working conditions, adequate remuneration, and the possibility of a better tomorrow — and let's include students in our definition of "worker" here as we do call it "school work" after all.

A Luddite pedagogy is about agency and urgency and freedom. "A Luddite pedagogy is a pedagogy of liberation," Torn Halves writes in Hybrid Pedagogy, "and, as such, it clashes head on with the talk of liberation peddled by advocates of ed-tech. According to the latter, the child, previously condemned to all the unbearably oppressive restrictions of having to learn in groups, can now be liberated by the tech that makes a 1:1 model of education feasible, launching each and every child on an utterly personal learning journey. Liberation as personalization — here the Luddite finds something that ought to be smashed." A Luddite pedagogy doesn't sneer when people balk at new technologies; it doesn't assume they won't use them because they're incompetent; it finds strength in non-compliance.

A Luddite pedagogy is a pedagogy of subversion and transgression. It is a pedagogy of disobedience and dismantling. It is a pedagogy of refusal and of care. It is — with a nod to Jesse's opening keynote — against models and against frameworks (quite literally, Luddites smash frames). It is wildly undisciplined.

Let us be Luddites, not pigeons.

(You can read what I actually talked about in my keynote here.)

30 Jul 06:35

Pigeon Pedagogy

These were my remarks today during my "flipped" keynote at DigPed. You can read the transcript of my keynote here.

We haven't had a dog in well over a decade. Kin and I travel so much that it just seemed cruel. But now, what with the work-from-home orders and no travel til there's a vaccine (and even perhaps, beyond that), we decided to get one.

It's actually quite challenging to adopt a dog right now, as everyone seems to be of the same mind as us. And even before the pandemic, there's been a bit of a dog shortage in the US. Spay-and-neuter programs have been quite effective, and many states have passed laws outlawing puppy mills. The West Coast generally imports dogs from other parts of the country, but these rescue-relocations have largely been shut down. The shelters are pretty empty.

It's a great time to be a dog.

Adopting a dog is quite competitive, and we have been on multiple waiting lists. But finally, we lucked out, and last week we adopted Poppy. She is a 9 month old Rottie mix. She weighs about 55 pounds. She is not housebroken yet — but we're getting there. She's very sweet and super smart and is already getting better on the leash, at sitting when in the apartment elevator, at sitting at street corners, at sitting when people and other dogs approach her. It's important, I think, if you have a big dog, that you train them well.

If you have a dog, you probably know that the best way to train it is through positive behavior reinforcement. That is, rather than punishing the dog when she misbehaves, the dog should be rewarded when she exhibits the desired behavior. This is the basis of operant conditioning, as formulated by the infamous psychologist B. F. Skinner.

The irony, of course. I've just finished a book on the history of teaching machines — a book that argues that Skinner's work is fundamental to that history and to how ed-tech is still built today. Ed-tech is operant conditioning, and we should do everything to resist it, and now I'm going to wield it to shape my dog's behavior.

Some background for those who don't know: As part of his graduate work, Skinner invented what's now known as "the Skinner Box." This "operant conditioning chamber" was used to study and to train animals to perform certain tasks. For Skinner, most famously, these animals were pigeons. Do the task correctly; get a reward (namely food).

Skinner was hardly the first to use animals in psychological experiments that sought to understand how the learning process works. Several decades earlier, for his dissertation research, the psychologist Edward Thorndike had built a "puzzle box" in which an animal had to push a lever in order to open a door and escape (again, often rewarded with food for successfully completing the "puzzle"). Thorndike measured how quickly animals figured out how to get out of the box after being placed in it again and again and again — their "learning curve."

We have in the puzzle box and in the Skinner Box the origins of education technology — some of the very earliest "teaching machines" — just as we have in the work of Thorndike and Skinner, the foundations of educational psychology and, as Ellen Condliffe Lagemann has pronounced in her famous statement "Thorndike won and Dewey lost," of many of the educational practices we carry through to this day. (In addition to developing the puzzle box, Thorndike also developed prototypes for the multiple choice test.)

"Once we have arranged the particular type of consequence called a reinforcement," Skinner wrote in 1954 in "The Science of Learning and the Art of Teaching," "our techniques permit us to shape the behavior of an organism almost at will. It has become a routine exercise to demonstrate this in classes in elementary psychology by conditioning such an organism as a pigeon.”

"...Such an organism as a pigeon." We often speak of "lab rats" as shorthand for the animals used in scientific experiments. We use the phrase too to describe people who work in labs, who are completely absorbed in performing their tasks again and again and again.

In education and in education technology, students are also the subjects of experimentation and conditioning. Indeed, that is the point. In Skinner's framework, they are not "lab rats"; they are pigeons. As he wrote,

...Comparable results have been obtained with pigeons, rats, dogs, monkeys, human children… and psychotic subjects. In spite of great phylogenetic differences, all these organisms show amazingly similar properties of the learning process. It should be emphasized that this has been achieved by analyzing the effects of reinforcement and by designing techniques that manipulate reinforcement with considerable precision. Only in this way can the behavior of the individual be brought under such precise control.

Learning, according to Skinner and Thorndike, is about behavior, about reinforcing those behaviors that educators deem "correct" — knowledge, answers, not just sitting still and raising one's hand before speaking (a behavior I see is hard-coded into this interface). When educators fail to shape, reinforce, and control a student's behavior through these techniques and technologies, they are at risk, in Skinner's words, of "losing our pigeon."

In 1951, he wrote an article for Scientific American: "How to Train Animals." I pulled it out again to prepare for this talk today and realized that it contains almost all the tips and steps that dog trainers now advocate for. Get a clicker and use it as the conditioned reinforcer: pair the click with treats so that the click itself comes to signal the reward. (The clicker is faster.) You can train a dog to do anything in less than twenty minutes, Skinner insisted. And once you're confident with that, you can train a pigeon. And then you can train a baby. And then…

Two years after that article, Skinner came up with the idea for his teaching machine. Visiting his daughter's fourth grade classroom, he was struck by the inefficiencies. Not only were all the students expected to move through their lessons at the same pace, but when it came to assignments and quizzes, they did not receive feedback until the teacher had graded the materials — sometimes a delay of days. Skinner believed that both of these flaws in school could be addressed through mechanization, and he built a prototype for his teaching machine which he demonstrated at a conference the following year.

Skinner believed that materials should be broken down into small chunks and organized in a logical progression for students to move through. The machine would show one chunk, one frame, at a time, and if the student answered the question correctly, they could move on to the next one. Skinner called this process "programmed instruction." We call it "personalized learning" today. And yes, this involves a lot of clicking.
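The mechanism is simple enough to sketch in a few lines of code — a hypothetical illustration of mine, not anything Skinner built: present one frame at a time, and advance only when the answer is correct.

```python
def programmed_instruction(frames, answer_fn):
    """Run a Skinner-style programmed-instruction session.

    frames: ordered list of (prompt, correct_answer) chunks.
    answer_fn: callable returning the student's answer to a prompt.
    Returns the total number of attempts needed to finish.
    """
    attempts = 0
    for prompt, correct in frames:
        while True:  # repeat the frame until the student gets it right
            attempts += 1
            if answer_fn(prompt) == correct:
                break  # immediate "reinforcement": advance to the next frame
    return attempts

# A scripted student who errs once on the last frame.
frames = [("2 + 2?", "4"), ("Capital of France?", "Paris"), ("3 + 4?", "7")]
script = iter(["4", "Paris", "London", "7"])
print(programmed_instruction(frames, lambda prompt: next(script)))  # → 4
```

The student never sees frame N+1 before mastering frame N, and the "feedback" arrives instantly rather than days later — exactly the two classroom inefficiencies Skinner wanted to engineer away.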

Skinner is often credited with inventing the teaching machine. He didn't. Sidney Pressey, another educational psychologist, had built one decades beforehand. (Skinner said Pressey's was more a testing machine than a teaching machine.) Regardless of who was or wasn't "the first," Skinner has shaped education technology immensely. Even though his theories have largely fallen out of favor in most educational psychology circles, education technology (and technology more broadly) seems to have embraced them — often, I think, without acknowledging where these ideas came from. Our computer technologies are shot through with behaviorism. Badges. Notifications. Haptic alerts. Real-time feedback. Gamification. Click click click.

According to Skinner, when we fail to properly correct behavior — facilitated by and through machines — we are at risk of "losing our pigeons." But I'd contend that with this unexamined behaviorist bent of (ed-)tech, we actually find ourselves at risk of losing our humanity. To use operant conditioning, Skinner wrote in his article on animal training, "we must build up some degree of deprivation or at least permit a deprivation to prevail which it is within our power to reduce." That is, behavioral training relies on deprivation. Behaviorist ed-tech relies on suffering — suffering that we could eliminate were we not interested in exploiting it to reinforce compliance. This pigeon pedagogy stands in opposition to the Luddite pedagogy I wrote about in the text for this keynote.

So, here's to our all being "lost pigeons," and unlearning our training. But dammit, here's to Poppy learning to be a very good and obedient dog.

30 Jul 05:14

How GPT3 Works - Visualizations and Animations


Nice essay full of custom animations illustrating how GPT-3 actually works.

30 Jul 05:13

A Canadian Sorry

by Dave Pollard


drawing by Elliot Keeler 

Perhaps uniquely, Canadians say “sorry” almost automatically when something goes wrong; it does not (often) imply an admission of guilt, or of remorse. In fact, this is so Canadian that our lawyers had to craft (I am not making this up) the Apology Act, so that prosecuting lawyers couldn’t imply that merely saying “sorry” was an admission of having done something wrong; that it could quite simply be “an act of commiseration”.

I put this propensity down to Canadians’ capacity for embracing complexity, ambiguity and uncertainty. We don’t tend to lay blame for things quickly (and sometimes after a rant we will follow it quickly with a “sorry” for the outburst, and an assurance that “I know we’re all doing our best”).

I am “Canadian-sorry” for many things. That is possibly because my life has been so blessed, so it is in my Canadian nature to feel guilty about that, rather than proud or rhapsodic. Here are some of the things I am sorry for, or about, these days:

I am sorry for being a seemingly incorrigible “joyful pessimist”. Sorry about being joyful, when that comes across as disengaged or unsympathetic. Sorry about being a pessimist, when that comes across as defeatist and undermining.

I am sorry for the terrible state of the world, both human and more-than-human. I more or less know that my contribution to that has been relatively negligible, but I still feel more than relatively responsible.

I am sorry for being a Doubting Thomas, such that when someone comes to me in anger or tears and is looking for compassion and reassurance and I try to parse and understand what they’re saying, so they’re forced to tell me, stupid Canadian that I am, to just STFU and agree with them, for once.

I am sorry for the ghastly plight of the world’s other-than-human animals, both farmed (97% of whom live in incredible misery and stress) and wild (97% of whom live in constant retreat as their habitats are erased by human activity). I’m a vegan, but confess I have on occasion been known to eat a cheese croissant. I excuse my inaction on animal rights on the basis that if I knew or saw more instances of animal cruelty, I would probably commit multiple serious crimes against the perpetrators. Sorry, animals.

I am sorry for being, at least for now, a fan of Radical Non-Duality, which is preposterous and unnerving to those who fear it might lead me to entirely dissociate from all the problems and predicaments of the (illusory) “real” world. Especially their particular problems and predicaments. I am sorry that I cannot explain why that will never happen.

I am sorry for not speaking out more militantly against those, both in my community and in others, who don’t social distance and don’t wear masks and who brag about how their community has done so well during CoVid-19 and are completely oblivious to how outbreaks that kill and permanently sicken people actually occur. I am sorry that I don’t know why I don’t speak out. Potentially shaming people, even for something as dangerous as drunk driving, seems somehow un-Canadian. If it’s because I’m just a conflict-avoiding coward, I’m sorry for that, too.

I am sorry for being so stupid, so self-centred, so inattentive and so un-self-aware as to have said and done all those hurtful things I said and did, and to have failed to say and do all those useful and appropriate things I should have said and done. There is no excuse.

I am sorry for my obliviousness to social, political and economic injustices of all kinds, and for the privilege that has enabled that obliviousness, and for the shameful inaction on my part that obliviousness has enabled.

I am sorry that this list may come across as flip or unsympathetic or passive-aggressive or hurtful or like one of those “Kill all x!; [and then later] you know I was just kidding right?” statements from the shithole president and his toadies and their ilk. I am sorry if this post has caused you to feel there is some barbed double meaning to this post. I am sorry if you think I doth protest too much and that there is definitely an intentional or unintentional double meaning to this post.

I am sorry for the grief and terror and destruction the shithole president and his toadies and their ilk all over the world have done, are doing, and will do, whether or not they are elected, re-elected, or otherwise take or retain power. I can’t help feeling my inaction or ignorance has somehow contributed to and/or abetted this.

I’m sorry for saying “sorry” so much that when you really need me to feel sad and sympathetic for your situation, the word, coming from me, has lost its meaning and comes across as insincere and dismissive.

I’m sorry for being so far ahead of mainstream thinking that most people just cannot fathom what I’m trying to say, so they’ve given up reading this blog. And I’m sorry it sounds so arrogant to say my thinking is far ahead of the mainstream. I didn’t mean you. I didn’t mean anyone. It’s just… I’m sorry.

I’m sorry for all the things you care so deeply about that I just cannot, despite all efforts and a ton of soul-searching and introspection and self-challenging, find it in myself to really care about.

I’m sorry for wasting so much of my and others’ time on things of no importance, and specifically for wasting so much time feeling sorry instead of actually doing something about it.

I’m sorry for being so afraid of so many things that I haven’t gathered up the courage to do things that would make an enormous difference to my life and that of others. And I’m sorry I don’t know why that fear has so paralyzed me.

And I’m sorry it has all been so hard for you. Yes, you. I keep thinking it shouldn’t be this hard, shouldn’t have to be this hard. I keep thinking we make it harder than it has to be, harder than it really is. But then I see how hard it really is for so many, perhaps for you, and I feel ashamed and sorry for even thinking this. Sorry it’s been hard, and hope it gets a bit easier.

And finally, I’m sorry if this post, instead of coming across as a minorly self-deprecating (except for the being too-far-ahead part) and strangely insightful, creative and clever essay, comes across as a veiled desperate plea for reassurance, or forgiveness, or emotional support. Didn’t mean it that way, and so sorry if it struck you that way.

Oh, and so sorry for that thing that happened last week, or was it last month? You know the thing I mean.

30 Jul 05:13

Poly EagleEye Cube :: A camera that follows you

by Volker Weber


Laptop cameras are unbelievably bad. Even the newest MacBooks deliver a mere 720p and fall apart quickly in low light. No comparison to the selfie cameras in smartphones. That is why I recently built a setup that integrates a smartphone camera into my Zoom rig. Why are laptop cameras so bad? Surely in part because the screen lids are so thin: the shallow depth makes for a poor camera.

That may explain why the EagleEye Cube from Poly is so thick. In one setup it is screwed onto a tripod; here it simply hangs on the screen of a ThinkPad.


The Cube can do more than just be thick, of course: it records video at 4K resolution through a very wide-angle lens and then works its magic on it. You can see a bit of that when the Cube sits on top of a ThinkVision monitor.


I am standing next to, rather than in front of, the screen and am still fully in frame. The Cube simply followed me and cropped the picture accordingly. If I walk out of frame to the other side, it follows me again. This has two advantages: you can get up during a video conference instead of being nailed to the computer, and several people can sit around a table and everyone stays in the picture. Since the Cube has microphones, it can detect who is currently speaking and then frame that person.


One preview image for a Teams conference shows me standing in front of the desk. When I then sit down, I am still in frame. To use this camera, you first have to install the Poly Companion, which brings a device driver for the camera. On the Surface Pro X, this driver could not be installed successfully.


Via the Companion you can also control the camera manually with all its parameters, including the desired framing. The Admin Settings hold many more options, such as firmware updates. This is also where you decide whether the camera is controlled manually or which framing it should choose. Instead of the speaker, it can also keep everyone in front of the camera in the picture at all times.


At the latest when unpacking the accessories, it was clear to me that this is not a personal webcam but really belongs in small meeting rooms. There is a solid bracket for screwing the Cube to a wall, plus a thick USB-C cable and one for Ethernet. The USB cable even has a screw to anchor it to the back so it cannot fall out. The Cube can also be used as an IP camera over Ethernet and can then be powered via PoE (Power over Ethernet). I have not tried that yet.


Feature-wise I find the camera great, but unfortunately not in terms of image quality, which may be mostly because in my scenario no more than 720p of the 4K remains once the camera zooms in on me. Horizontally the camera has a field of view of nearly 120 degrees, which I am far from able to fill. Nor is it a flattering look, because wide-angle lenses tend to make you appear "round."
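The drop from 4K to roughly 720p is simple crop arithmetic — a back-of-the-envelope sketch where the subject's angular width is my assumption, not a Poly spec:

```python
def cropped_width(sensor_px=3840, sensor_fov_deg=120, subject_fov_deg=40):
    """Pixels left after digitally cropping a subject window out of a
    wide-angle frame (linear approximation across the field of view)."""
    return sensor_px * subject_fov_deg / sensor_fov_deg

# A single person spanning ~40 of the 120 degrees in a 4K-wide frame:
print(cropped_width())  # → 1280.0, i.e. roughly 720p-class width
```

In other words, a digital zoom onto one third of the scene throws away two thirds of the 4K sensor's horizontal pixels, which matches the soft image I saw.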

More >

30 Jul 05:13

The Best Portable Headphone Amp With Built-In DAC

by Lauren Dragan and Brent Butterworth

Many audio enthusiasts still prefer wired headphones, but the best pairs may require more power than the cheap amps built into phones, tablets, and computers can muster. If your favorite wired headphones sound too quiet or dull when connected to a mobile device, you may need a portable headphone amplifier with a built-in digital-to-analog converter (DAC). We recommend the EarMen Eagle for most people because of its great combination of convenience, performance, and affordability. But we also have recommendations if you need something more powerful or more affordable.

30 Jul 05:11

Stuff that works :: Jabra Elite Active 75t

by Volker Weber


I never really understood why people liked their Jabra Elite 75t earbuds so much, until I put them in my ears. I did not like their design, until I put them in my ears. And I thought they might be earbuds for people who don't have an iPhone. Until ... you get the drift.

Now I think they can beat AirPods in all but one (important) category: AirPods are incredibly easy to pair to Apple devices and they switch between iPhone and Apple Watch automatically. After the fall update, they will get even better in this respect.

Let's start with the design. The form is dictated by two factors: the earbuds disappear into your ears, and they have a physical button, which controls all the headset functions: start/stop, answer/end/reject, skip forward and backward, volume up and down, hear-through on/off.


The Elite Active 75t use the same Sound+ app as the other Elite and Evolve headsets. And they add a very important feature: My Sound. It's a hearing test, for the right and left ears separately. Tap when you hear a sound. It only takes a minute or so, but then the earbuds are really yours, because they adjust to your personal hearing profile. GN, the parent company of Jabra, makes hearing aids. I can't wait for the other Jabra headphones to get the same feature.


What's in the box? A charging case, three sizes of gels, a USB-C charging cable with a USB-A plug at the other end, and the earphones of course. Both earbuds have two microphones for phone calls, and I find they work very well. Quite astounding for headphones that don't stick out of your ears.

The Elite 75t are IP53 rated; the Elite Active 75t take that to IP57. Jabra extends a two-year warranty against dust or water damage, so the Active variant is perfect for sports.

Since the earbuds fit into your ear canal, you get pretty good passive noise isolation, and there is a hear-through function, which you switch on/off with a button press on your left ear. The volume of that hear-through can be set in the app. Press the button on your right ear to start/stop the music. That means if you want to hear everything around you as if you were not wearing earbuds, press both buttons once. To get back into your zone, do it again. Press and hold on your left ear to turn the volume down, on your right ear to turn it up. Double press left for the next track, triple press left to go back. Double press the right button to talk to your assistant, be it Siri or Google Assistant. If you have connected multiple devices, the one paired last is called.
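That control scheme is easy to lose track of in prose; as a reader's cheat sheet it maps roughly like this (the gesture labels are mine, not Jabra's documentation):

```python
# Unofficial summary of the Elite Active 75t button controls described above.
CONTROLS = {
    ("left", "single press"): "hear-through on/off",
    ("right", "single press"): "start/stop music",
    ("left", "press and hold"): "volume down",
    ("right", "press and hold"): "volume up",
    ("left", "double press"): "next track",
    ("left", "triple press"): "previous track",
    ("right", "double press"): "voice assistant (Siri or Google Assistant)",
}

print(CONTROLS[("left", "double press")])  # → next track
```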


The assistant only talks to your right ear, which is kind of strange. It turns out you cannot use the left earbud without the right. If you take one out, the music stops. But if you take the left one out, you can press the button on the right to restart the music. Put both back in and it starts automatically.

Jabra currently leads the software game for headphones. The Sound+ app lets you customize all aspects of the headphones, and it is very well designed. My Sound takes everything to a new level.

More >

30 Jul 05:10

The Best Indoor HDTV Antenna

by Grant Clauser
Our three favorite indoor HDTV antennas, together with their cables.

If you live within 20 to 30 miles of a TV transmitter tower, you can probably get most network and local channels by using a good indoor HDTV antenna instead of paying for live-TV service.

Any antenna’s ability to pick up a given over-the-air TV channel in a given locale is unpredictable, so we test antennas in a variety of places to gauge average performance. Through six test runs in three cities, the Channel Master FLATenna delivered the best average result, so we recommend starting with that model.

If you live more than 20 miles from your area’s main TV transmitter array, you might get better results with the Channel Master FLATenna+, which includes a detachable amplifier.

30 Jul 05:09

No comment necessary

by Charlie Stross
mkalus shared this story from Charlie's Diary.

Post-infection cardiac damage found in 78% of recovering COVID19 patients

That's 78% of a cohort, average age 49, of whom 67% had recovered at home (i.e., the disease was not categorized as severe enough to need hospitalization). The cohort was normalized with respect to other risk factors relative to uninfected patients. Diagnosis was by MRI. Looks reasonably solid at first glance; published in JAMA Cardiology (Journal of the American Medical Association: Cardiology). The study was coordinated via a German hospital.

Reason for "no comment necessary" is that this suggests most COVID19 survivors—including mild disease survivors—suffer cardiac damage.

You don't want to get this virus.

30 Jul 05:04

Twitter Favorites: [ryanmottau] Side note: do people still say nice things about other people on Twitter? I may be showing my ignorance of the platform.

Ryan Mottau @ryanmottau
Side note: do people still say nice things about other people on Twitter? I may be showing my ignorance of the platform.
30 Jul 05:04

Twitter Favorites: [cdemwell] I just learned that the @vpl actually will buy ebooks you suggest to them, at least some of the time. This will change my reading life.

Chris Demwell @cdemwell
I just learned that the @VPL actually will buy ebooks you suggest to them, at least some of the time. This will change my reading life.