Shared posts

08 Dec 02:12

How (not) to destroy an aspirational OKR

Imagine you’re in a monthly status meeting with some of your company’s management. It’s halfway through the year. You’re reviewing your group’s OKRs (Objectives and Key Results, for those of you who haven’t heard the acronym before – it’s the latest management-by-objectives hotness with its own TED talk). One of your objectives has a key result of finding 5000 dingleberries for the year. That’s pretty aspirational, given that you only found 1200 last year.

It’s good to have a challenge. Nobody in the group knew whether there was a way to get to 5000, but setting the goal meant that your team would go and find out whether there was a way to acquire significantly more dingleberries than ever before.

The bad news, however, is that it’s halfway through the year, and your group has only managed to find 650 dingleberries so far. Either your group’s efforts to significantly increase the number of dingleberries found haven’t yet panned out, or they’re the wrong efforts. The pessimist on your team might say that your group hasn’t really tried anything new, but let’s give the benefit of the doubt here, shall we?

In any case, using the OKR framework, this result tells you that you’ve got more work to do to break free of your current dingleberry acquisition curve and get onto a new one. It might be time to try some new things, or maybe you need to give it a bit more time to see if your current efforts will pay off soon. This is valuable information!

Here’s where it can quickly go wrong, however.

Everyone in the room sees a target number of 5000. Of course, that was supposed to be an aspirational number, but with enough repetition, it’s on the verge of being transformed into something that will grade the group’s performance. And the group is on track to hand in a result of finding 1300 dingleberries.

Failure, even with an 8% increase!
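
For the record, the arithmetic behind that tension fits in a few lines; here is a quick sketch (the 0.0 to 1.0 score is just the common OKR grading convention, used purely for illustration):

# Quick arithmetic behind the dingleberry example (illustrative only).
target = 5000          # aspirational key result for the year
last_year = 1200       # dingleberries found last year
projected = 1300       # what the group is on track to hand in

okr_score = projected / target          # common 0.0-1.0 OKR grading convention
yoy_growth = projected / last_year - 1  # year-over-year improvement

print(f"OKR score: {okr_score:.2f}")    # 0.26 -- reads as failure against the target
print(f"YoY growth: {yoy_growth:.1%}")  # 8.3% -- a real improvement over last year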

At this point, it’s all too tempting for someone in the room to flinch and ask: “Should we revise that number downward so that we can make sure to come in on target?”

🤦‍♀️

In a single stroke, especially if it’s the boss that asks the question, this can destroy the usefulness of the aspirational objective and make clear that the highest priority is the internal need to appear successful. Worse, it’s a great way to encourage your group to sandbag the targets for all of their OKRs next year — not a great outcome.

Now is the time where you need to brace, be strong, and embrace the fact that you’re learning valuable information about what your team has done so far. Now is where it’s time to say, “Well, it looks like whatever we’re doing so far hasn’t panned out yet.”

And then ask, “Why is that? What else are we going to try?”

Otherwise, you might as well just call your target a key performance indicator (KPI), celebrate the 8% increase, and skip the aspirational part of the exercise.

08 Dec 02:11

On moving my blog to Hugo

After deciding to throw caution to the wind and move my webstuff back into a static format, this website finally landed on Hugo. I did an initial migration from WordPress to Jekyll, which looked really promising but took waaaaay too long to generate the 2,800+ posts for this site (almost 45 minutes). Hugo runs as a native application and is MUCH faster: generating the entire site currently takes less than 5 seconds, and uploading it to the server via rsync takes only a little longer.
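
The deploy step itself boils down to a build plus an rsync; here is a minimal sketch of the idea in Python (the Hugo flag, host name, and paths are placeholders rather than the actual setup):

# Minimal Hugo build-and-deploy sketch; host and paths are placeholders.
import subprocess

# Regenerate the site (Hugo writes its output to ./public by default).
subprocess.run(["hugo", "--minify"], check=True)

# Push the generated files to the web server with rsync.
subprocess.run(
    ["rsync", "-az", "--delete", "public/", "user@example.com:/var/www/blog/"],
    check=True,
)
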
08 Dec 02:10

Black Friday & Cyber Monday 2019 Laptop Specials

by Sean Packham

Get 10% off Librem Laptops

It’s Black Friday! Get 10% off the base Librem 13 v4 and Librem 15 v4 laptops. If you’re looking for added security choose a Pureboot bundle or our anti-interdiction services from the firmware drop-down on the configuration page. Shipping is on us too! We offer free international shipping to pretty much anywhere in the world.

What makes our Librem laptops so special? These are my favorite things:

Get 10% off a Librem Laptop

Discover the Librem 5

Purism believes building the Librem 5 is just one step on the road to launching a digital rights movement, where we—the people—stand up for our digital rights, where we place the control of your data and your family’s data back where it belongs: in your own hands.

Preorder now

08 Dec 02:10

Electric Boats

I love boating, but I hate the fact that powerboats guzzle loads of fossil fuel. I assuage my guilt by noting that the distance traveled is small — a couple of hours for each return trip to the cabin, and there are sadly fewer than twenty of those per year. Then I got into a discussion on /r/boating about whether electric boats are practical, so herewith some scratchpad calculations on that subject.

I’ve ridden on an electric boat, on the Daintree River in Queensland, on a small alligator-watching tour. This thing was flat and had room for maybe fifteen tourists under a canopy, necessary shelter from the brutal tropical sun; on top of the canopy were solar panels, which the pilot told me weren’t quite enough to run the boat up and down the river all day; he had to plug it in at night. The motor was a 70HP electric and our progress along the river was whisper-quiet; I loved it.

I should preface this by saying that I’m not a hull designer nor a battery technologist nor a marine propulsion engineer, so the calculations here have little more precision than Enrico Fermi’s famous atomic-bomb calculation. But possibly useful I think.

Narrowing the question

There are two classes of recreational motorboat: Those that go fast by planing, and those that cruise at hull speed, which is much slower and smoother. Typically, small motorboats plane and larger ones cruise. I’m going to consider my Jeanneau NC 795 as an example of a planing boat, and a Nordic Tug 32 as an example of a cruiser, because there’s one parked next to me and it’s a beautiful thing.

’Rithmetic

My car has a 90 kWh battery, of a size and weight that could be accommodated in either boat quite straightforwardly. A well-designed electric such as a Tesla typically burns 20 kWh/100km but you can’t use all the kWh in a battery, so you can reasonably expect a range of about 400km.

The Jeanneau gets about 1.1 km per liter of fuel while planing (it does two or three times better while cruising along at hull speed). Reviewers say that at 7-8 knots the Nordic burns about a gallon per hour, which my arithmetic says is 3.785 km/L.

A typical gas car gets about 10L / 100km, so 10 km/L. So the Nordic Tug is about 38% as efficient at turning fuel into km as a car, and the Jeanneau is only about 11% as efficient. (And of course both go much slower, but that’s not today’s issue.)

If the same is true for electric “fuel”, the battery that can take a Tesla 400km could take the Nordic tug about 150km and the Jeanneau a mere 44km.
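
Putting the same back-of-the-envelope arithmetic in one place, as a quick sketch that just reuses the rough figures quoted above (so it is no more precise than they are):

# Back-of-the-envelope electric-boat range estimate, using the figures above.
battery_range_km = 400       # roughly what a ~90 kWh battery gets a Tesla

car_km_per_l = 10.0          # typical gas car: 10 L / 100 km
boats = {
    "Jeanneau NC 795 (planing)": 1.1,     # km per litre
    "Nordic Tug 32 (hull speed)": 3.785,  # km per litre
}

for name, km_per_l in boats.items():
    relative_efficiency = km_per_l / car_km_per_l
    est_range_km = battery_range_km * relative_efficiency
    print(f"{name}: {relative_efficiency:.0%} as efficient as the car, "
          f"~{est_range_km:.0f} km on the same battery")

# Jeanneau NC 795 (planing): 11% as efficient as the car, ~44 km on the same battery
# Nordic Tug 32 (hull speed): 38% as efficient as the car, ~151 km on the same battery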

Discussion

There are boats that get worse mileage than the Jeanneau, but they’re super-macho muscle boats or extravagant yachts the size of houses. So for recreational boats accessible to mere mortals, the Jeanneau, which is in a class called “Express Cruiser”, is kind of a worst-case, a comfy family carrier that can be made to go fast on the way to the cabin, but you pay for it.

So the tentative conclusion is that at the moment, batteries are not that attractive for express cruisers. But for tugboat-class craft designed for smoothness not speed, I’d say the time to start building them is now. Among other things, marine engine maintenance is a major pain in boaters’ butts, and electric engines need a whole lot less. On top of which they’re a lot smaller, and space is always at a premium out on the water.

Variations and Futures

The following are things likely to undermine the calculations above:

  1. The Jeanneau could fit a considerably bigger battery than most cars; packing it into the hull so that the boat still performs well would be an interesting marine engineering problem.

  2. The Nordic Tug could pretty easily fit in a battery two or three times that size and, at hull speed, I suspect it wouldn’t slow down much.

  3. The torque curves of electric and gas engines are vastly different; which is to say, electrics don’t really have one. You have to get the Jeanneau’s engine up north of 4K RPM to really zoom along, which I suspect is not exactly fuel-optimized performance.

  4. Related to the previous item, I wouldn’t be surprised if there were considerable gains you could pull out of the low-RPM/high-torque electric engine in terms of propeller design and perhaps hull shape.

  5. Battery energy density is improving monotonically, but slowly.

  6. Boats tend to spend their time out under the open sky and many have flat roofs you could put solar panels on to (slowly) recharge your batteries. In fact, many do already just for the 12V cabin batteries that run the fridge and lights.

  7. I expect installation of Level 2 chargers on the dockside would require some innovation for safe operation in an exposed environment full of salt water. I doubt that it’d be practical to offer 50-100kW DC fast-charging gear at the gas barge.

I’ve long lusted after tugboat-style craft, but they’re expensive, bigger (thus moorage is hard to find), and go slower. Given a plausible electric offering, I think I could learn to live with that.

08 Dec 02:10

Weeknote 48/2019

by Doug Belshaw

I’ve been really tired this week, partly because I’ve been sleepwalking so much. That, in turn, is a function of this time of year (when I get more restless in general) and possibly my decision to rebalance my focus a bit in 2020.

Lack of sleep triggers my migraines, so I ended up taking Friday off work as I couldn’t really think properly. Despite that, I still had to work on last-minute contracts for the MoodleNet team.

Having time off is actually a pretty difficult thing for a remote worker. There have been times when I definitely wouldn’t have gone into an office, but have nevertheless been heavily dosed-up on painkillers, lying in bed using my laptop. You also never feel properly “off-duty”.

I talked about these kinds of things, as well as the obvious massive upsides to remote work, in a short presentation I gave at my son’s school this week. In addition, I discussed the importance of keeping your options open, and the benefits of working for yourself or with friends instead of for big corporates.

Other than that, because last week was so intense, I only worked two days this week. My main focus was on ensuring the MoodleNet team performed a retrospective, and then letting everyone following the project’s progress know what’s going on. Some of that is captured in this blog post.

On Monday, ostensibly my ‘day off’, I attended We Are Open’s monthly co-op day, my first for around six months after a spell as a ‘dormant member’. There’s plenty for me to get my teeth into there, and I’m looking forward to getting more involved in some very tasty client work soon.

Finally, I’ve been preparing for my trip to New York next week to speak at ITHAKA’s Next Wave event. I realised that I’ve been putting off reading Shoshana Zuboff’s epic book The Age of Surveillance Capitalism because I know it will mean making some changes in my own digital life. I’m about a quarter of the way through it, and it couldn’t be a better way to ensure I’m informed for my session.

The title and description I’ve been given are:

Truth, Lies, and Digital Fluency

The internet and social media apps are integral to society, research, and learning today, but increasingly we are questioning the trustworthiness of digital information. How bad is it today, and how much worse can it get? What can and should educators, researchers, information professionals and the companies whose sites enable information sharing do?

The person following me on the programme is from Facebook and talking about data and elections, so I couldn’t be a better warm-up act, really…

So, yes, next week I’m doing MoodleNet work on Monday and Friday. I’m in New York from Tuesday to Thursday. There’s then two weeks left until Christmas, during which time the MoodleNet team should be able to start some form of federation testing.

Team Belshaw will be in Iceland just before Christmas (including for my birthday) so I’m very much looking forward to that.


Photo of anti-fascist posters in MNAC, Barcelona taken by me last Sunday morning.

08 Dec 02:09

Dependence

Matthias Melcher, x28's New Blog, Nov 30, 2019

Short post in response to my presentation slides, which makes a point I think I should have emphasized more in my talk: "the talk pointed me to the ethics of care and its awareness for vulnerability and dependence. While vulnerability and dependence in education are no doubt a familiar aspect for teachers or parents of smaller children, I must confess that I had not spent much thought about them."

08 Dec 02:09

Stuff I’ve been reading (November 2019)

Things I finished reading in November 2019:

Books

  • Banner, Olivia. Communicative Biocapitalism: The Voice of the Patient in Digital Health and the Health Humanities. University of Michigan Press, 2017.
  • Beer, David. The data gaze: Capitalism, power and perception. Sage, 2018.
  • Bennett, Matthew, et al. Life on the Autism Spectrum: Translating Myths and Misconceptions Into Positive Futures. Springer, 2019.
  • Cipolla, Cyd, Kristina Gupta, David A. Rubin, and Angela Willey, eds. Queer feminist science studies: A reader. University of Washington Press, 2017.
  • Coulthard, Glen Sean. Red Skin, White Masks: Rejecting the Colonial Politics of Recognition. University of Minnesota Press, 2014.
  • Epstein, Steven. Impure science: AIDS, activism, and the politics of knowledge. University of California Press, 1996.
  • Law, John. After method: Mess in social science research. Routledge, 2004.
  • Latour, Bruno. Science in action: How to follow scientists and engineers through society. Harvard university press, 1987.
  • Loukissas, Yanni Alexander. All Data are Local: Thinking Critically in a Data-driven Society. MIT Press, 2019.
  • Murphy, Michelle. Seizing the means of reproduction: Entanglements of feminism, health, and technoscience. Duke University Press, 2012.
  • Rose, Nikolas. Our psychiatric future. John Wiley & Sons, 2018.
  • Rottenberg, Catherine. The rise of neoliberal feminism. Oxford University Press, 2018.
  • Simpson, Audra. Mohawk interruptus: Political life across the borders of settler states. Duke University Press, 2014.

Papers and Chapters

  • Adams, D. L., and Nirmala Erevelles. “Unexpected spaces of confinement: Aversive technologies, intellectual disability, and “bare life”.” Punishment & Society 19.3 (2017): 348-365.
  • Adrian, Stine Willum. “Rethinking reproductive selection: traveling transnationally for sperm.” Biosocieties (2019): 1-23.
  • Amoore, Louise. “Biometric borders: Governing mobilities in the war on terror.” Political Geography 25.3 (2006): 336-351.
  • Ananny, Mike, and Kate Crawford. “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.” New Media & Society 20.3 (2018): 973-989.
  • Bagatell, Nancy. “From cure to community: Transforming notions of autism.” Ethos 38.1 (2010): 33-55.
  • van Baren-Nawrocka, Jan, Luca Consoli, and Hub Zwart. “Calculable bodies: Analysing the enactment of bodies in bioinformatics.” Biosocieties: 1-25.
  • Barn, Balbir S. “Mapping the public debate on ethical concerns: algorithms in mainstream media.” Journal of Information, Communication and Ethics in Society (2019).
  • Beer, David Gareth. “The Social Power of Algorithms.” Information, Communication and Society (2017): 1-13.
  • Berenstain, Nora. “Epistemic Exploitation.” Ergo, an Open Access Journal of Philosophy 3 (2016).
  • Billawala, Alshaba, and Gregor Wolbring. “Analyzing the discourse surrounding Autism in the New York Times using an ableism lens.” Disability Studies Quarterly 34.1 (2014).
  • Brooks, Emily. “‘Healthy Sexuality’: Opposing Forces? Autism and Dating, Romance, and Sexuality in the Mainstream Media.” Canadian Journal of Disability Studies 7.2 (2018): 161-186.
  • Bucher, Taina. “Want to be on the top? Algorithmic power and the threat of invisibility on Facebook.” New Media & Society 14.7 (2012): 1164-1180.
  • Bueter, Anke. “Epistemic injustice and psychiatric classification.” Philosophy of Science (2019).
  • Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016): 2053951715622512.
  • Carel, Havi, and Ian James Kidd. “Epistemic injustice in healthcare: a philosophial analysis.” Medicine, Health Care and Philosophy 17.4 (2014): 529-540.
  • Mac Carthaigh, Saoirse. “Beyond biomedicine: challenging conventional conceptualisations of autism spectrum conditions.” Disability & Society (2019): 1-15.
  • Chiodo, Simona. “The greatest epistemological externalisation: reflecting on the puzzling direction we are heading to through algorithmic automatisation.” AI & Society (2019): 1-10.
  • Christian, Stephen Michael. “Autism in International Relations: A critical assessment of International Relations’ autism metaphors.” European Journal of International Relations 24.2 (2018): 464-488.
  • Crichton, Paul, Havi Carel, and Ian James Kidd. “Epistemic injustice in psychiatry.” BJPsych bulletin 41.2 (2017): 65-70.
  • Davis, Emmalon. “Typecasts, tokens, and spokespersons: A case for credibility excess as testimonial injustice.” Hypatia 31.3 (2016): 485-501.
  • Van Dijck, José. “Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology.” Surveillance & Society 12.2 (2014): 197-208.
  • Dinishak, Janette. “The deficit view and its critics.” Disability Studies Quarterly 36.4 (2016).
  • Dohmen, Josh. “‘A little of her language’: epistemic injustice and mental disability.” Res Philos 93.4 (2016): 669-691.
  • Dosch, Rebecca. “Resisting Normal: Questioning Media Depictions of Autistic Youth and Their Families.” Scandinavian Journal of Disability Research 21.1 (2019).
  • Dotson, Kristie. “Tracking epistemic violence, tracking practices of silencing.” Hypatia 26.2 (2011): 236-257.
  • Dotson, Kristie. “Conceptualizing epistemic oppression.” Social Epistemology 28.2 (2014): 115-138.
  • Duffy, John, and Rebecca Dorner. “The pathos of ‘mindblindness’: autism, science, and sadness in ‘Theory of Mind’ narratives.” Journal of Literary & Cultural Disability Studies 5.2 (2011): 201-215.
  • Dyck, Erika, and Ginny Russell. “Challenging Psychiatric Classification: Healthy Autistic Diversity the Neurodiversity Movement.” Healthy Minds in the Twentieth Century. Palgrave Macmillan, Cham, 2020. 167-187.
  • Fallin, Mallory, Owen Whooley, and Kristin Kay Barker. “Criminalizing the brain: Neurocriminology and the production of strategic ignorance.” Biosocieties 14.3 (2019): 438-462.
  • Gillespie, Tarleton. “The relevance of algorithms.” Media technologies: Essays on communication, materiality, and society 167 (2014): 167.
  • Grinker, Roy Richard. “Autism, ‘Stigma,’ Disability: A Shifting Historical Terrain.” Current Anthropology 61.S21 (2019).
  • Guilfoyle, Michael. “Client subversions of DSM knowledge.” Feminism & Psychology 23.1 (2013): 86-92.
  • Hacking, Ian. “Humans, aliens & autism.” Daedalus 138.3 (2009): 44-59.
  • Ho, Anita. “Trusting experts and epistemic humility in disability.” IJFAB: International Journal of Feminist Approaches to Bioethics 4.2 (2011): 102-123.
  • Holton, Avery E., Laura C. Farrell, and Julie L. Fudge. “A threatening space?: Stigmatization and the framing of autism in the news.” Communication Studies 65.2 (2014): 189-207.
  • Holton, Robert, and Ross Boyd. “‘Where are the people? What are they doing? Why are they doing it?’(Mindell) Situating artificial intelligence within a socio-technical framework.” Journal of Sociology (2019): 1440783319873046.
  • Jones, Karen. “The politics of intellectual self-trust.” Social Epistemology 26.2 (2012): 237-251.
  • Kapp, Steven. “Introduction to the Neurodiversity Movement” Autistic Community and the Neurodiversity Movement. Palgrave Macmillan, Singapore, 2020. 2-9.
  • Kearney, Elizabeth, Antonina Wojcik, and Deepti Babu. “Artificial intelligence in genetic services delivery: Utopia or apocalypse?.” Journal of Genetic Counseling (2019).
  • Kitchin, Rob. “Big Data, new epistemologies and paradigm shifts.” Big Data & Society 1.1 (2014): 2053951714528481.
  • Kitchin, Rob, and Tracey Lauriault. “Towards critical data studies: Charting and unpacking data assemblages and their work.” (2014).
  • Kopelson, Karen. “‘Know thy work and do it’: The Rhetorical-Pedagogical Work of Employment and Workplace Guides for Adults with ‘High-Functioning’ Autism.” College English 77.6 (2015): 553-576.
  • Lafrance, Michelle N., and Suzanne McKenzie-Mohr. “The DSM and its lure of legitimacy.” Feminism & Psychology 23.1 (2013): 119-140.
  • Langan, Mary. “Parental voices and controversies in autism.” Disability & Society 26.2 (2011): 193-205.
  • Leveto, Jessica A. “Toward a sociology of autism and neurodiversity.” Sociology Compass 12.12 (2018): e12636.
  • Maclure, Jocelyn. “The new AI spring: a deflationary view.” AI & Society (2019): 1-4.
  • Massumi, Brian. “The Future Birth of the Affective Fact: The Political Ontology of Fact.” The Affective Theory Reader: 52-70.
  • McKinnon, Rachel. “Epistemic injustice.” Philosophy Compass 11.8 (2016): 437-446.
  • Medina, José. “Varieties of Hermeneutical Injustice” The Routledge Handbook of Epistemic Injustice. Routledge, 2017. 41-52.
  • Milton, Damian EM. “On the ontological status of autism: the ‘double empathy problem’.” Disability & Society 27.6 (2012): 883-887.
  • Milton, Damian EM. “Autistic expertise: a critical reflection on the production of knowledge in autism studies.” Autism 18.7 (2014): 794-802.
  • Muller, Michael, et al. “How Data Science Workers Work with Data: Discovery, Capture, Curation, Design, Creation.” Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 2019.
  • Olesen, Esben. “Overcoming Diagnostic Uncertainty: Clinicians, Patients and Institutional Work in Practice.” Scandinavian Journal of Disability Research 21.1 (2019).
  • Pálsson, Gísli. “How deep is the skin? The geneticization of race and medicine.” BioSocieties 2.2 (2007): 257-272.
  • Panofsky, Aaron, and Joan Donovan. “Genetic ancestry testing among white nationalists: From identity repair to citizen science.” Social Studies of Science 49.5 (2019): 653-681.
  • Passi, Samir, and Steven J. Jackson. “Trust in Data Science: Collaboration, Translation, and Accountability in Corporate Data Science Projects.” Proceedings of the ACM on Human-Computer Interaction 2.CSCW (2018): 136.
  • Peña-Guzmán, David M., and Joel Michael Reynolds. “The Harm of Ableism: Medical Error and Epistemic Injustice.” Kennedy Institute of Ethics Journal 29.3 (2019): 205-242.
  • Van der Ploeg, Irma. “Normative assumptions in biometrics: On bodily differences and automated classifications.” Innovating Government. TMC Asser Press, 2011. 29-40.
  • Pohlhaus, Gaile. “Varieties of Epistemic Injustice” The Routledge handbook of epistemic injustice. Routledge, 2017. 13-26.
  • Queirós, Filipa. “The visibilities and invisibilities of race entangled with forensic DNA phenotyping technology.” Journal of forensic and legal medicine 68 (2019): 101858.
  • Quirici, Marion. “Geniuses without imagination: Discourses of autism, ability, and achievement.” Journal of Literary & Cultural Disability Studies 9.1 (2015): 71-88.
  • O’Reilly, Michelle, Jessica Nina Lester, and Nikki Kiyimba. “Autism in the Twentieth Century: An Evolution of a Controversial Condition.” Healthy Minds in the Twentieth Century. Palgrave Macmillan, Cham, 2020. 137-165.
  • Richards, Michael. “‘You’ve got autism because you like order and you do not look into my eyes’: some reflections on understanding the label of ‘autism spectrum disorder’ from a dishuman perspective.” Disability & Society 31.9 (2016): 1301-1305.
  • Rouvroy, Antoinette, Thomas Berns, and Elizabeth Libbrecht. “Algorithmic governmentality and prospects of emancipation.” Réseaux 1 (2013): 163-196.
  • Rouvroy, Antoinette. “The end (s) of critique: Data behaviourism versus due process.” Privacy, due process and the computational turn. Routledge, 2013. 157-182.
  • Russell, Ginny. “Critiques of the Neurodiversity Movement.” Autistic Community and the Neurodiversity Movement. Palgrave Macmillan, Singapore, 2020. 287-303.
  • Schalk, Sami. “Reevaluating the supercrip.” Journal of Literary & Cultural Disability Studies 10.1 (2016): 71-86.
  • Scully, Jackie Leach. “From “She Would Say That, Wouldn’t She?” to “Does She Take Sugar?” Epistemic Injustice and Disability.” IJFAB: International Journal of Feminist Approaches to Bioethics 11.1 (2018): 106-124.
  • Seaver, Nick. “What should an anthropology of algorithms do?.” Cultural Anthropology 33.3 (2018): 375-385.
  • Simpson, Audra. “The ruse of consent and the anatomy of ‘refusal’: Cases from indigenous North America and Australia.” Postcolonial Studies 20.1 (2017): 18-33.
  • Skinner, David. “Forensic genetics and the prediction of race: What is the problem?.” Biosocieties (2018): 1-21.
  • Star, Susan Leigh, and Geoffrey C. Bowker. “Enacting silence: Residual categories as a challenge for ethics, information systems, and communication.” Ethics and Information Technology 9.4 (2007): 273-280.
  • Stevenson, Jennifer L., Bev Harp, and Morton Ann Gernsbacher. “Infantilizing Autism.” Disability Studies Quarterly: DSQ 31.3 (2011).
  • Sweet, Paige L., and Claire Laurier Decoteau. “Contesting normal: The DSM-5 and psychiatric subjectivation.” Biosocieties 13.1 (2018): 103-122.
  • Tapaninen, Anna-Maria, and Ilpo Helén. “Making up families: how DNA analysis does/does not verify relatedness in family reunification in Finland.” Biosocieties: 1-18.
  • Thomas, Suzanne L., Dawn Nafus, and Jamie Sherman. “Algorithms as fetish: Faith and possibility in algorithmic work.” Big Data & Society 5.1 (2018): 2053951717751552.
  • Thatcher, Jim, David O’Sullivan, and Dillon Mahmoudi. “Data colonialism through accumulation by dispossession: New metaphors for daily data.” Environment and Planning D: Society and Space 34.6 (2016): 990-1006.
  • Timimi, Sami, et al. “Deconstructing Diagnosis: Four Commentaries on a Diagnostic Tool to Assess Individuals for Autism Spectrum Disorders.” Autonomy (Birmingham, England) 1.6 (2019).
  • Treweek, Caroline, et al. “Autistic people’s perspectives on stereotypes: An interpretative phenomenological analysis.” Autism 23.3 (2019): 759-769.
  • Verhoeff, Berend. “What is this thing called autism? A critical analysis of the tenacious search for autism’s essence.” BioSocieties 7.4 (2012): 410-432.
  • Verhoeff, Berend. “Autism in flux: a history of the concept from Leo Kanner to DSM-5.” History of Psychiatry 24.4 (2013): 442-458.
  • De Vries, Katja. “Identity, profiling algorithms and a world of ambient intelligence.” Ethics and Information Technology 12.1 (2010): 71-85.
  • Wajcman, Judy. “Automation: is it really different this time?.” The British Journal of Sociology 68.1 (2017): 119-127.
  • Walmsley, Jan. “Healthy Minds and Intellectual Disability.” Healthy Minds in the Twentieth Century. Palgrave Macmillan, Cham, 2020. 95-111.
  • Wanderer, Jeremy. “Varieties of testimonial injustice.” The Routledge Handbook of Epistemic Injustice. Routledge, 2017. 27-40.
  • Wardrope, Alistair. “Medicalization and epistemic injustice.” Medicine, Health Care and Philosophy 18.3 (2015): 341-352.
  • Whooley, Owen. “Diagnostic ambivalence: psychiatric workarounds and the Diagnostic and Statistical Manual of Mental Disorders.” Sociology of Health & Illness 32.3 (2010): 452-469.
  • Williams, Rua M. “Autonomously Autistic.” Canadian Journal of Disability Studies 7.2 (2018): 60-82.
  • Williams, Rua M., and Juan E. Gilbert. “Cyborg Perspectives on Computing Research Reform.” Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 2019.
  • Williams, Rua M. “Metaeugenics and Metaresistance: From Manufacturing the ‘Includeable Body’ to Walking Away from the Broom Closet.” Canadian Journal of Children’s Rights 6.1
08 Dec 02:09

"A Special Note from Norm's Son"

by peter@rukavina.net (Peter Rukavina)

When my father retired from his position as a research scientist at the Canada Centre for Inland Waters, he took on the task of editing and distributing a monthly newsletter to his fellow retirees. By the time he handed over the editorial reins in 2018, he’d put out 100 issues, filled with announcements, jokes, cartoons and updates. He took the newsletter very seriously, and my mother, brothers and I all have memories of various family vacations and functions requiring time set aside to allow Dad to set up his laptop in an impromptu workspace to ensure the newsletter went out on time.

When Dad died two weeks ago, I sent a note to the new editors of the newsletter, and they sent word of his passing to the retirees mailing list, which prompted a flood of messages of condolence, often with work stories from the early years.

I offered to send a brief note for inclusion in the newsletter, on behalf of my brothers and me, which they generously agreed to include. This is what I wrote:

A Special Note from Norm’s Son

Our father, Norm Rukavina, longtime editor of this newsletter, died this week at the age of 82.

There has never been a time in our lives when we didn’t closely associate Dad with “The Centre”: the family moved from Ottawa in 1968 so that he could take up work at CCIW, and he remained there for his entire career. His involvement with CCIW and, later, NWRI, was the backbone of his life and, as his kids, the backbone of our childhood. During the late 1960s and through the 1970s we joined Dad in the field each summer on the shore of whatever Great Lake he was focused on the nearshore sediments of at the time; while he took core samples, we learned about salamanders from park rangers. We watched the Moon landing in the back of a Government of Canada VW bus while in the field. We all remember the experience of getting presents from the CCIW Santa Claus every Christmas in the auditorium, and we marvelled at how closely our presents matched what we’d discussed with our parents. Other memories include the CCIW open houses, getting to eat in the cafeteria when Dad would take us to work, being on a first name basis with the Commissionaires. And nightly references around the supper table to mysterious places like “Hydraulics” and “Tech Ops” and “Drafting,” of which we knew little.

It took Dad a long time to retire: we always got the impression this was a combination of there actually being follow-up work that needed doing, with, perhaps, a sense that he wasn’t quite sure what would happen if he stopped working altogether. Eventually, however, there came a day to load the last cardboard box into the car, hand in his pass, and drive home for good. After retirement Dad took great pleasure in editing the retirees’ newsletter: it kept him in touch with good friends, kept him wise to the latest technology, and provided him with a steady stream of jokes to send his children. He also enjoyed the get-togethers enormously, and would speak to us of people he’d meet up with, names we’d been hearing for years and years and years.

From the day he started work in Burlington, Dad kept a daily journal, mostly just bullet points about what had been achieved that day. We picked some off the shelf tonight and were amazed that the cast of characters in his working life in 1968 included many people we’ve heard from on the phone or by email this week with memories and messages of condolence. It’s brought us tremendous comfort to know that Dad was a part of so many people’s lives. Working at The Centre was, to we kids, simply what a Dad did, and so formed our assumption about how working life worked. It was a pretty good template to start out with.

Thank you to everyone who’s reached out this week to us; it’s truly appreciated. We’re suggesting that those who want to memorialize Dad make a donation to the Joseph Brant Hospital Foundation (jbhfoundation.ca): Joseph Brant took excellent care of Dad in recent years, and in his final weeks. It’s also been an important part of our family’s life for more than 50 years, most recently with the more than 1,500 hours of volunteer work Dad did there after retirement.

The editors, Jo-ann and Clint, added a note of their own:

Norm was instrumental in starting our CCIW retirees coffee club, which now has over 200 members, and produced 100 issues of this newsletter, up until May 2018. We have big shoes to fill. It’s a great help in keeping the retirees in touch with each other, and our monthly coffee meetings are well attended, usually numbering 25 to 40 people. Norm was a wonderful man, very well liked and respected.

Norm was also one of the original Research Scientists at CCIW working in the then Limnogeology Group. He was one of the “Trailer Trash”, working on site in the trailers before the current building was erected. He was a good friend and colleague and he will be long remembered and missed by many.

Until last week I never truly understood how valuable, important and comforting condolences are: the notes, cards, and emails we’ve received, and the people who’ve stopped me on the street, or at the market, to express their sympathies have all offered tremendous comfort. Who knew.

Norm Rukavina

08 Dec 02:09

Vilnius Christmas tree

by Stephen Rees
©GO Vilnius

This image came from a Press Release which I will copy and paste below. I will spare you my opinions about cutting down trees, and Christmas in general. I will say that this is simply a promotional item from Go Vilnius, the Official Development Agency of the City of Vilnius and I did not receive any payment or other benefit from this post. I have never visited Vilnius and I am not about to promote it here – and I have edited out some of the more exaggerated claims.

But I did think that using an old chess piece as a model was a Good Idea.

I am sure if you want to find out more about Vilnius you know how to do that and do not actually need me to provide link(s).

November 30, 2019: The traditional lighting of the Christmas tree in Vilnius attracted citizens and guests alike. The capital of Lithuania has received a lot of global attention over the years for its unique and stunning Christmas trees, and this year is no exception. This year, the decorated Christmas tree resembles the 14-15th century Queen figure from the game of chess, which was found by archaeologists in 2007.

Decorations adorn the already traditional 27-meter tall metal construction, which bears some 6,000 branches. The construction is specially designed to create a completely sustainable Christmas tree. All the actual tree branches used in the construction are removed from the trees by foresters while carrying out the general maintenance of the forest, so neither trees nor even individual branches are cut just for the spectacle.

The particular figure which served as a model for decorations was found during the archeological excavations around the Ducal Palace in Vilnius. Dating back to the 14th-15th century, the beautifully ornamented figure was made of spindle tree. Its middle part is carved with geometrical patterns and topped with floral ornaments. According to historians, the game of chess was played by the nobility of the Grand Duchy of Lithuania from the end of the 14th century.

A traditional Christmas market is set up around the Christmas tree, along with another one located at the Town Hall Square. The markets will stay open from the 30th of November to the 7th of January. 

08 Dec 02:09

Red Season

by Rui Carmo

The past month went by in a (quite literal) daze, and much to my disappointment I did not manage to do much reading or anything related to music other than spending some time noodling with Notion - not the note-taking app, but the music composition one, which is terrific and has a moderately decent library of orchestral sounds.

Why is this important, you ask? Well, because it supports basic articulations and expressive notation, which are an utter pain to do in conventional iOS DAWs. The audio processing could be a lot better (the built-in EQ is pretty basic and does not support AUv3 plugins, which would be a nice way to extend the app without making it even more complex), but I’m not in a hurry to make perfect things.

Nodal Pain Points

I spent a few hours over the past few weeks doing some server-side JavaScript (both raw and with the added scaffolding afforded by Node-RED, partly because I decided to build a custom node for it), which entailed updating my reference containers, messing about with express and wondering if we wouldn’t all be better off if event emitters and traditional Promises had never happened and the language had just gone straight to async/await.

The overall feeling is still the same: Every time I go out and pick up anything that is older than three months, code styles, dependencies and just about everything else have changed. npm audit, in particular, has become beyond scary.

The contrast to every other language runtime I use is still marked, and were it not for wanting to minimize build steps (and mental overhead) I would probably have done the whole thing in TypeScript. Or ClojureScript.

I really, really miss writing Clojure.

CI/CD All The Things

I finally sat down and figured out GitHub Actions (which, as expected, are hardly different from Azure Pipelines) and added some fairly extensive (but still basic) testing to Piku, which is now tested on all versions of Python from 3.5 to 3.8 and (by building testing containers inside the pipeline) with the OS packages that ship in current Debian and Ubuntu LTS.
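
Conceptually, the per-environment container testing boils down to the loop sketched below; this is not the actual Piku workflow, and the image tags, mount path, and test command are stand-ins, but it shows the shape of the idea:

# Rough sketch of "run the test suite inside a container per Python version".
# Not the real pipeline; images, paths and commands are stand-ins.
import os
import subprocess

PYTHON_IMAGES = ["python:3.5", "python:3.6", "python:3.7", "python:3.8"]

for image in PYTHON_IMAGES:
    print(f"== testing against {image} ==")
    subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{os.getcwd()}:/app", "-w", "/app",   # mount the checkout
         image, "sh", "-c",
         "pip install -r requirements.txt && python -m pytest"],
        check=True,
    )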

This was largely driven by Piku having sped up its development cycle considerably (thanks to a few new contributors) and new features and tweaks being proposed and added on a weekly basis – some of which were unwittingly breaking existing deployments.

I’m really happy with how Piku itself is evolving, but have had little time to hack on the core, although I am working on an Azure deployment script (via cloud-init, obviously) and have gotten it to deploy Node-RED in a largely sane fashion.

On the other hand, I just realised I now have working, hands-on (and fairly sophisticated) experience with pretty much every single major CI/CD setup out there: Azure Pipelines, GitHub Actions, GitLab CI, Atlassian Bamboo, Travis CI and, of course, Jenkins (which, to be quite honest, is starting to seem truly ancient and brittle).

I feel a need to get a similarly broad handle on build artifacts next (I’d like to automate Piku releases as part of the pipeline, for starters), but things are largely OK in that regard – most of what I use or deliver are containers anyway.

However, I suspect that upcoming work mayhem is going to put a clamp on that. But at least I managed to find enough time this month, if not necessarily for the best of reasons.


08 Dec 02:08

An Epidemic of AI Misinformation

by Volker Weber
The media is often tempted to report each tiny new advance in a field, be it AI or nanotechnology, as a great triumph that will soon fundamentally alter our world. Occasionally, of course, new discoveries are underreported. The transistor did not make huge waves when it was first introduced, and few people initially appreciated the full potential of the Internet. But for every transistor and Internet, there are thousands or tens of thousands of minor results that are overreported, products and ideas that never materialized, purported advances like cold fusion that have not been replicated, and experiments that lie on blind alleys that don't ultimately reshape the world, contrary to their enthusiastic initial billing.


08 Dec 02:08

Teaching my son how the web works

by Dries

For the first time, I taught my twelve year old son some HTML and CSS. This morning after breakfast we sat down and created a basic HTML page with some simple styling.

I explained to him that the <a> tag is the most powerful HTML tag of them all ...

It was a special experience for both of us. It looks like I sparked his interest, as he later asked where he could learn about different HTML tags. I loved that he shared an interest in learning more.

But it also made me think that rather than just teach him HTML and CSS syntax, I want to help him develop an appreciation for how the web works. I'll have to think about how to best explain concepts like HTTP, DNS, IP addresses, and maybe even TCP.
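
One way to make those concepts concrete is a handful of lines that do each step by hand; here is a minimal sketch using only Python's standard library (example.com is just a placeholder site):

# Minimal "how the web works" demo: a DNS lookup, then a raw HTTP request.
import socket
import http.client

host = "example.com"

# DNS: turn the name into an IP address.
ip = socket.gethostbyname(host)
print(f"{host} resolves to {ip}")

# HTTP over TCP: connect to port 80 and ask for the front page.
conn = http.client.HTTPConnection(host, 80, timeout=10)
conn.request("GET", "/")
response = conn.getresponse()
print(f"HTTP status: {response.status} {response.reason}")
print(response.read(200).decode("utf-8", errors="replace"))  # first bytes of the HTML
conn.close()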

08 Dec 02:08

The Bones We Leave Behind

It can seem as if the tide has begun to turn against facial recognition technology. Controversies — from racist and transphobic implementations appearing in policing to the concerns of privacy advocates about the billions of images these systems gather — have drawn attention to the risks the technology poses and its potential to strengthen an already overwhelming carceral state while perpetuating so-called surveillance capitalism.

Citing concerns around civil liberties and racial discrimination, the cities of Oakland, San Francisco, and Somerville, Massachusetts, have all recently banned the technology’s use by law enforcement. The state of California is considering a bill to curtail facial recognition use in body cameras. And activists from Chicago to Massachusetts have mobilized to prevent the expansion of facial recognition systems in, for example, public housing. It’s almost as though the techno-dystopia of ubiquitous state and corporate surveillance could be stopped — or at least delayed until the next train.

All this organizing and protesting is good and vital work; the groups doing it need support, plaudits, and solidarity. Facial recognition is a dangerous technology, and prohibiting it — and in the interim, resisting its normalization — is vital. But focusing on “facial recognition” — a specific technology, and its specific institutional uses — carries risks. It treats the technology as if it were a discrete concept that could be fought in isolation, as if it could simply be subtracted to “fix” a particular concern. But facial recognition, like every other technology, is dependent on a wide range of infrastructures — the existing technologies, practices, and flows that make it possible. Pushing back against facial recognition technology without considering its supporting infrastructure may leave us in the position of having avoided future horrors, but only future horrors.

Some of the preconditions for facial recognition technology are cultural and historically rooted. As I’ve previously pointed to, the work of Simone Browne, C. Riley Snorton, Toby Beauchamp, and many others shows how unsurprising it is that much of this technology — originating as it does in a society built on xenophobia, settler colonialism, and antiblackness — has been developed for biased and oppressive surveillance. The expansion of surveillance over the past few decades — as well as the pushback sparked when it begins to affect the white and wealthy — cannot be understood without reference to the long history of surveilling the (racialized, gendered) other. U.S. passports originated in anti-Chinese sentiment; state-oriented classification (undergirded by scientists and technologists) often structured itself around anti-indigenous and/or anti-Black efforts to separate out the other. Most recently, the war on drugs, the border panics of the 1990s, and the anxious paranoia of the Cold War have all legitimized the expansion of surveillance by raising fears of a dangerous other who seeks to do “us” (that is, normative U.S. citizens) harm and is either so dangerous as to need new technologies, so subtle as to be undetectable without them, or both.

It is that history which works to justify the current development of technologies of exclusion and control — facial recognition, fingerprinting, and other forms of tracking and biometrics. These technologies — frequently tested at borders, prisons, and other sites “out of sight” — are then naturalized to monitor and control the “normal” as well as the “deviant.”

But beyond the sociocultural conditions that make it ideologically possible, a facial recognition system requires a whole series of other technological systems to make it work. Historically, facial-recognition technology worked like this: a single static image of a human face would have points and lines mapped on it by an algorithm. Those points and lines, and the relationship between them, would then be sent to a vast database of existing data from other images, with associated names, dates, and similar records. A second algorithm would compare this extracted structure to existing ones and alert the operator if one or more matching photographs were found. To make this possible, we needed cultural conventions and norms (presenting identity cards when asked; the acceptability of CCTV cameras) but also technical infrastructure — those algorithms, that database, the hardware they run on, and the cables connecting them to an operator and their equipment.
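
As a schematic sketch of that matching step (in Python, with tiny made-up feature vectors standing in for the points-and-lines data; this shows the shape of the pipeline, not any particular vendor's system):

# Schematic sketch of the classic pipeline's matching step: a face reduced to a
# feature vector is compared against a gallery of known faces. The feature
# extraction itself is assumed to have happened already; all data here is toy.
import math

def distance(a, b):
    """Euclidean distance between two fixed-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe_features, gallery, threshold=0.6):
    """Return the names of gallery records close enough to the probe face."""
    return [record["name"]
            for record in gallery
            if distance(probe_features, record["features"]) < threshold]

# Toy usage: two enrolled faces, one probe image's extracted features.
gallery = [
    {"name": "Alice", "features": [0.1, 0.4, 0.7]},
    {"name": "Bob",   "features": [0.9, 0.2, 0.3]},
]
print(identify([0.12, 0.41, 0.68], gallery))  # ['Alice'] -- the operator gets an alert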

Unfortunately (or fortunately) this approach to facial recognition did not work very well. As late as this 2010 report on the impact of lighting on its accuracy, the best–performing algorithm in “uncontrolled” settings (i.e. any environment less consistent than a passport-photo booth) had a 15 percent false-positive rate. The reason you have to make the same face on the same background in every passport photo isn’t just because the State Department wants to make you suffer (although, for clarity, it absolutely does); it’s because facial-recognition algorithms for the longest time were utterly incapable of handling even minor differences in head angle or lighting between a “source” photo and “target” photos.

The technology has since improved, but not because of a series of incremental algorithmic tweaks. Rather, the massive increase in high-resolution cameras, including video cameras, over the past decade has led to an overhaul in how facial-recognition technology works. Rather than being limited to a single hazy frame of a person, facial-recognition systems can now draw on composite images from a series of video stills taken in sequence, smoothing out some of the worst issues with lighting and angle and so making the traditional approach to facial-recognition usable. As long as you had high-quality video rather than pixelated single-frame CCTV shots, you could correct for most of the problems that appear in uncontrolled capture.

But then researchers took this one step further: Realizing that they had such high-quality image sequences available from these new (higher resolution, video-based) cameras, they decided to write algorithms that would not simply cut out the face from a particular image and assess it through points and lines, but reconstruct the face as a 3D model that could be adjusted as necessary, making it “fit” the angle and conditions of any image it might be compared with. This approach led to a massive increase in the accuracy of facial recognition. Rather than the accuracy rate of 85 percent in “uncontrolled capture” that once prevailed, researchers in 2018 testing against 10 data sets (including the standard one produced by the U.S. government) found accuracy rates of, at worst, 98 percent.

This level of accuracy in current facial recognition technology allows authorities to dream of an idealized, ubiquitous system of tracking and monitoring — one that meshes together pre-existing CCTV systems and new “smart” city technology (witness San Diego’s default integration of cameras into their new streetlights) to trace individuals from place to place and produce archives that can be monitored and analyzed after the fact. But to buy into this dream — to sign up for facial recognition technology — a city often also has to sign up for a network of HD video cameras, streaming data into central repositories where it can be stored ad infinitum and combed through to find those who are at any time identified as “suspicious.”

A contract for the algorithms comes with a contract for the hardware they run on (or, in the case of Amazon’s Ring, free hardware pour encourager les autres). There is no standalone facial-recognition algorithm; it depends on certain other hardware, other software, certain infrastructure. And that infrastructure, once put into place, always contains the potential for facial recognition, whether facial recognition is banned or not, and can frequently be repurposed for other surveillance purposes in its absence.

The layers of infrastructure involved make facial recognition technology hard to constrain. San Francisco’s ordinance, for example, bans facial recognition outright but is much more lenient when it comes to the camera networks and databases feeding the algorithms. If a city introduces facial-recognition technology and you spend a year campaigning against it and win, that’s great — but the city still has myriad video cameras logging public spaces and storing them for god knows how long, and it still has people monitoring that footage too. This process may be much less efficient than facial recognition, but it’s still the case that we’ve succeeded in just swapping out analytics technology for a bored police officer, and such creatures aren’t widely known for their deep commitment to anti-racism. And this isn’t hypothetical: 24/7 monitoring of live, integrated video feeds is exactly what Atlanta does.

Beyond that, leaving the skeleton of the surveillance infrastructure intact means quite simply that it can be resurrected. Anyone who watches horror films knows that monsters have a nasty tendency to be remarkably resilient. The same is true of surveillance infrastructure. Facial recognition is prohibited in Somerville now. In one election’s time, if a nativist wind blows the wrong way, that might no longer be the case. And if the city was using the technology prior to banning it and has left the infrastructure in place, switched on and recording, it will be able to surveil people not only in the future but in the past to boot. It becomes trivial, when the technology is re-authorized, to analyze past footage and extract data about those who appear in it. This enacts what Bonnie Sheehey has called “temporal governmentality,” where one must, even in the absence of algorithmic surveillance, operate as if it were occurring because it might in the future be able to retroactively undertake the same biased practices. And when facial recognition is as cheap and easy as a single software update, it’s not going to take long to turn it back on.

Luckily, the movie solutions to killing monsters line up with facial recognition too: removing the head or destroying the brain. That is, eliminating the infrastructures on which it depends — infrastructures that can, in the interim, be used for less efficient but still dangerous forms of social control. Banning facial recognition formally is absolutely a start, but it provides lasting protection only if you plan to print off the ordinance and glue it over every surveillance camera that’s already installed. We must rip out those cameras, unplug those servers. Even “just” a network of always on, eternally stored HD cameras is too much — and such a network leaves us far more vulnerable to facial recognition technology’s resurrection than we were before its installation.

There is nothing wrong with the tactics of activist movements set up around facial recognition specifically; protesting, organizing, forcing transparency on the state and using that to critically interrogate and educate on surveillance practices is both good work and effective. But what we need to do is ensure we are placing this technology in context: that we fight not only facial recognition, a single symptom of this wider disease, but the underlying condition itself. We should work to ban facial recognition, and we should celebrate when we succeed — but we should also understand that “success” doesn’t just look like putting the technology in the grave. It looks like grinding down the bones so it could never be resurrected.

08 Dec 02:08

Cellar Door Software

by peter@rukavina.net (Peter Rukavina)

When I was 16 years old, my father and I started a company together called Cellar Door Software.

We got the name from the CBC: one day we were listening to the radio in the car and heard a segment where listeners had been invited to submit nominations for the most mellifluous words in the English language; someone suggested cellar door. We agreed. And that became the name.

(We also had a pretty nice-sounding cellar door at our family home in Carlisle).

The personal computer was the grand overlap between my life and Dad’s: he was an early adopter of computers, using them from the punch-card days onward in his work as a scientist. We both became fascinated, in the early 1980s, with personal computers, eventually acquiring a Radio Shack Color Computer for the family.

Cellar Door Software became an umbrella for two projects: my work as a programmer, and our joint work offering computer courses, both at the local high school and at the Hamilton YMCA, to children and adults.

We borrowed $2000 from CIBC to start the business in September of 1982 and we ran it for three years until we closed it down–likely, if memory serves, because I moved away from home–in August of 1985.

We used the $2000 for a bunch of capital expenses, which my brother Mike found in a PDF file this week:

  • Grand & Toy filing cabinet ($64.64)
  • Centronics printer ($616.25)
  • Sears cassette recorder ($40.53)
  • Texas Instruments cassette recorder ($69.00)
  • Used Atari 400 computer ($80.00)
  • Two used 5-1/4” disk drives ($10.65)
  • TRS-80 Model 4 computer ($1000.00)
  • Electrohome monitor ($170.13)

The printer (a dot-matrix) and the TRS-80 Model 4 were both in service of my work for Neil Evenden at Skycraft Hobbies, where I modified an inventory control system to better suit the needs of his hobby shop.

Our primary source of income otherwise was our courses at the high school and at the Y: in both cases we used Sinclair ZX-81 computers and black-and-white televisions, one setup for every two students. We taught the very most basic programming, like:

10 PRINT "HELLO WORLD!"
20 GOTO 10

It was my first exposure to teaching, my first exposure to “entrepreneurship,” and helped me pay for university.

Over the three years we ran the business we earned a total of $3946.30.

At some point during our business tenure I had an opportunity to take a batik course, and I created a sign for the business; it’s hung in Dad’s workshop ever since:

Cellar Door Software

08 Dec 02:06

Paw, a macOS API tool

by Rui Carmo

I’m linking to Paw not because they gave away licenses during Black Friday (full disclosure: I got one), but because native, quality macOS apps deserve support and praise in this age of junk cross-platform Electron-based “apps” that try to shoehorn another Chromium runtime onto your machine.

Plus I use this kind of thing regularly, and it’s nice to have a native macOS tool I can rely on. You know what to do.


08 Dec 02:05

When computer science has to be a requirement if we want it to be available to everyone

by Mark Guzdial

Robert Sedgewick had an essay published at Inside Higher Ed last month on Should Computer Science Be Required? (see link here). He has some excellent reasons for why students should study computer science. Several of them overlap with reasons I’ve suggested (see blog post here). He writes:

Programming is an intellectually satisfying experience, and certainly useful, but computer science is about much more than just programming. The understanding of what we can and cannot do with computation is arguably the most important intellectual achievement of the past century, and it has led directly to the development of the computational infrastructure that surrounds us. The theory and the practice are interrelated in fascinating ways. Whether one thinks that the purpose of a college education is to prepare students for the workplace or to develop foundational knowledge with lifetime benefits (or both), computer science, in the 21st century, is fundamental.

So, we both agree that we want all students in higher education to take a course in computer science — but he doesn’t want that course to be a requirement. He explains:

When starting out at Princeton, I thought about lobbying for a computer science requirement and asked one of my senior colleagues in the physics department how we might encourage students to take a course. His response was this: “If you do a good course, they will come.” This wisdom applies in spades today. A well-designed computer science course can attract the vast majority of students at any college or university nowadays — in fact, there’s no need for a requirement.

Colleges and universities offer the opportunity for any student to take as many courses as they desire in math, history, English, psychology and almost any other discipline, taught by faculty members in that discipline. Students should have the same opportunity with computer science.

I have heard this argument before. Colleagues at Stanford have pointed out that most Stanford undergraduates take their courses without a requirement. Many colleagues have told me that a requirement would stifle motivation and would make students feel that we were forcing computer science down their throats. They would rather “attract” students (Sedgewick’s word above). I recognize that most students at the University of Michigan, where I now work, have the kind of freedom and opportunity that Sedgewick describes.

But I know many institutions and situations where Sedgewick’s description is simply not true. CS teachers can try attracting students, but they are only going to get them if it’s a required course.

  • In most Engineering programs with which I am familiar, students have relatively few electives. In most of the years that I was at Georgia Tech, Nuclear Engineering students had no elective hours — if they took even a single course out of the planned, lockstep four years, the degree would take them longer (and cost them more). Mechanical Engineering students had exactly one three credit hour course that they could choose over a four year program. These are programs where students are not allowed to take any course that they desire. Those two degree programs are extreme situations (but still exist at many institutions). Even here in Computer Science & Engineering at the University of Michigan, students are limited in the number of elective hours that they can take outside of CS and engineering classes. For many technical degrees, if you want students to take a particular subject, you have to convince the faculty who own that degree to include computer science in those requirements. At Georgia Tech, we had to sell a computer science requirement to the rest of campus. I tell that story here.
  • If students are paying by the course, then they take the courses that they need, not the ones that they might desire. I worked with Chattahoochee Technical College when I was in Georgia. They struggled to get students to even finish the requirements for a certificate. Students would take the small number of courses that helped them to gain the job skills that they needed. Completing courses that were merely interesting or recommended to them was simply an unnecessary expense. There are millions of students in US community colleges. If you really think that everyone should learn computer science, you have to think about reaching those millions of students, too.
  • Finally, there are the students who might end up loving computer science but, right now, they don’t desire it. First, they may simply not know what computer science is. In California (as an example), only 39% of high schools offer computer science, and only 3% of all California high school students take it (see link here). Alternatively, they may know what computer science is, but have already decided that it’s not for them. In my group’s research, we use expectancy-value theory often to explain student behavior — if students don’t think that they’ll succeed in computer science, or don’t see people like them belonging in computer science, then they won’t take it. That’s one reason for mandatory computer science in high schools — to give everyone the chance to discover a love in CS. So far, it’s not working in US high schools. I have argued that (see article here) we don’t know how to ramp up to that at the high school level yet. But we certainly could at the higher education level.

I understand Sedgewick’s argument for why he wants to offer the most compelling computer science course he can at Princeton, in order to attract students and motivate them to learn about computing. (I don’t agree with him that curated online videos are better than live lectures, but I think we mean something different by “lectures.”) I also understand why making that course a requirement might undermine his efforts to motivate and engage students. But that describes only a small percentage of students: those at the Princeton-like institutions in the US and elsewhere. I’m sure he would agree that everyone deserves the opportunity to learn about computer science. His reasons why CS is important for Princeton students are valid for everyone. We are going to need different strategies to reach everyone. For some students, a requirement is the only way that we are going to make it available to them.

08 Dec 02:04

Father and Son

by peter@rukavina.net (Peter Rukavina)

I took this photo of me and Oliver last week on the train home from Ontario. Its accidental fogginess proved an accurate reflection of how we were feeling at the time.

08 Dec 02:04

Express Mode for Apple Pay Now Available with Transport for London

by Ryan Christoffel

Benjamin Mayo, reporting for 9to5Mac:

If you are in London, you can now travel using Apple Pay on the Underground network without having to use Touch ID or Face ID authentication.

This means that you don’t need to hold up the queue at the turnstile when travelling the Underground. You can simply tap and go with either iPhone or Apple Watch.

Express Transit has been a feature since iOS 12.3 but it requires partnerships with the transit networks in order for the feature to work. Apple announced the new support for the London Underground today with a new web page and sending out notifications to iPhone and Apple Watch users in the United Kingdom.

One particularly interesting detail about Express Mode is that it enables a special Power Reserve state for your iPhone or Apple Watch. When your device is set up for Express Mode, and its battery is close to depletion, iOS and watchOS will automatically save a certain amount of power so you can still use your device for transit access for another five hours after Power Reserve kicks in. This will undoubtedly reduce a lot of anxiety for people who regularly deplete their battery, and would otherwise want to keep an alternate transit access method as a backup. Power Reserve is available on the iPhone XR and newer, and Apple Watch Series 4 and newer.

Even setting aside Power Reserve, the convenience of no longer needing to pre-authenticate with Express Mode is a significant user experience improvement for transit customers. The feature’s also available in my home of New York City, but only in an extremely limited rollout; I can’t wait for it to be part of my daily commute.

→ Source: 9to5mac.com

08 Dec 02:03

Ormidale Block – West Hastings Street

by ChangingCity

For decades this building has been wrongly attributed. The architect has never been in doubt; it was G W Grant, a designer with an eccentric architectural style. While many architects attempted to achieve perfect symmetry, Grant was quite happy to throw in a variety of design elements, in this case stacked up on top of each other on one side of the façade. There’s an oriel window on the top floor, over two floors with projecting bay windows, over the elaborate terra cotta entrance, with an arched doorway.

The Heritage Designation of the building says: “Built in 1900 by architect George W. Grant for R. W. Ormidale, the building housed the offices of several wholesale importers.” With a carved terracotta plaque saying “Ormidale RW 1900” above the oriel window, it seems an appropriate attribution, although quite who R W Ormidale might be was never made clear. There are no records of anybody in the city with that name (or indeed, in Canada).

An entry in the Contract Record, and an illustration in the 1900 publication ‘Vancouver Architecturally’ solve the mystery. The brochure, which was a promotional booklet put together by several of the city’s architects, shows a sketch of the building, but identifies it as the ‘Walker Block’. In 1899 the Contract Record announced “G. W. Grant, architect, is preparing plans for a four-storey block, 48×120 feet, to be built on Hastings street by R. Walker.”

This confirms that the developer was a Mr. R Walker – hence the ‘R W’. A 1900 court case clarifies that the developer was Robert Walker. Mr. Walker signed off on a payment owed to a builder in G W Grant’s office, for the ‘Robert Walker Block’. There was a Robert Walker resident in the city. He had arrived around 1890, with his wife Susan. He was a carpenter from the Isle of Man, and she was born in Quebec. They were still in the city in 1901, living at 1921 Westminster Avenue; he was still a carpenter, and the family had grown to four, with two children at home. He designed and built a house at the same address in 1904 for $650. In 1911 Robert had become Clerk of Works for the City of Vancouver. Home was now called 1921 Main Street, and the house must have been pretty full, as the Census registered 22 lodgers, fifteen of them from the Isle of Man. It seemed unlikely that he had the funds to develop this building in 1900 on his carpenter’s wages, but he was the only person with the right name in the street directory.

However, an 1898 list of eligible voters identifies another Vancouver resident – Robert Walker, miner. He was living in the Pullman Hotel. (The list shows there were seven Robert Walkers in BC, including a blacksmith and a missionary). The adjacent building, the Flack Block, was developed in 1899 by another successful miner, so it’s quite possible that the Ormidale Block was developed with the proceeds of the Klondike gold rush, perhaps by someone with Scottish roots (if the Ormidale name is significant). We haven’t been able to confirm that hypothesis, in part because of Robert Gile Walker. While he’s unlikely to be our developer, he was successful in the Klondike. Between 1897 and 1901 he made five trips in search of gold. After his fifth and final exploration in Nome, Alaska, Walker returned to Tacoma, where he married in 1901 and continued to run a successful real estate business.

Our developer may have been Robert Henry Walker, and it’s possible that the Scottish connection was indirect. It may be a coincidence, but in 1885 Robert J Walker, a Fellow of the Royal Geographical Society, lived in a house called ‘Ormidale’ in Leicester, England. A Robert H Walker died in 1912 and was buried in the Masonic Section of Mountain View Cemetery, but we haven’t found anything to confirm he was the developer here.

The façade of the Ormidale block has recently been repaired and returned to its asymmetrical splendour, with a brand new office construction behind. It uses an unusual construction system; a hybrid structure consisting of an innovative wood-concrete composite floor system supported on steel beams and columns. This floor system allows for exposed wood ceilings throughout the office floors, a nod to the heritage aspect of this project, while achieving increased load-carrying capacity and stiffness by compositely connecting the concrete slab to the wood panels. At the back there’s an entirely contemporary skin consisting of rusted corten steel, and there’s also a green rooftop patio. Our top ‘before’ picture was taken in 2004, and the lower only five years ago, when the building was probably in its worst state. The bay windows had been removed many decades ago, although we’re not exactly sure when as the last image we have that shows them is from 1941.


08 Dec 01:38

Twitter Favorites: [RLukeDuBois] i wish all technical documentation could be as beautifully experimental as early synthesizer documentation. https://t.co/6md06F6kIT

08 Dec 01:27

On Abundance and Intermittance, Pt. I: Toilet Rolls

by Ton Zijlstra
Cornucopia
Abundance isn’t shipping containers full of stuff. (image by me, CC BY NC SA)

Last month I was at the Scifi Economics Lab, and Cory Doctorow was one of the speakers. There was much to unpack in his talk, and he has a style of delivery that makes you want to quote a lot of things. I won’t give in to that urge, but will highlight one expression.

At some point he talked about abundance. It’s a term I’ve struggled with over the years because it’s so easy to interpret as having mountains of stuff, as per the image above. Or as having everything for free. A Dutch expression, or rather admonition, “we don’t live in the land where chickens fly into your mouth already fried”, is probably the image our Calvinist culture associates with abundance: no work, but all the fruits of it. I have a sense of the meaning of abundance other than that, but never felt I had the right words to express that other perspective on abundance.

Doctorow’s metaphor for abundance was useful for me. He described backpackers always having to carry a roll of toilet paper with them: if not used, it would disintegrate in your backpack, and therefore regularly needed replacement. Backpackers spend resources on replacing their toilet paper and spend mental energy on keeping an eye on still having it with them. A constant worry, and an inefficient use of resources (as you don’t use much of the toilet paper for its intended purpose, due to degradation).
Abundance then is being certain there is toilet paper when and where you need it. This is a qualitative metaphor that adds location, timing and actual need as dimensions of relevance. Abundance here is also more efficient, reduces worry, and is always there when needed. But it’s not limitless, free, or available anywhere for anything at any whim. It’s about qualitative abundance, not quantitative abundance (‘heaps of free stuff’).

Hotel Room Toilet Paper Roll Fold: metaphorical, practical abundance. Image by Tony Webster, license CC BY

This makes a vast number of things abundant in the society I live in, because it is there when I need it, without worry. Water, food, energy, clothing, transport, and everything else including toilet paper. (I once had a Central-Asian colleague who told me she thought, having visited, the Netherlands was totally boring because of that predictable abundance: no need to improvise anytime/anywhere.) Especially in the context of the six ways to die, abundance is an important notion, also because that abundance is often acquired by increasing the complexity of our systems. That complexity can break down.

Time, location and the context of an existing need are qualitative dimensions worth considering as design factors. What do you do when one or more of them cannot be counted on? Or can be counted upon, but only at specific intervals? This is dealing with, and designing for, intermittence as a building block of both resilience and agency. That’s for another time.

05 Dec 03:31

Wintertime Is Kale Time!

by Andrea

NDR Doku – Wie geht das? Grünkohl – Norddeutsches Supergemüse (“How does that work? Kale – northern Germany’s super vegetable”).

“After the first night frosts, kale season begins again. Especially in northern Germany, kale is among the most popular vegetables. It is a true superfood: low in calories and rich in protein, vitamins and fibre. Among other things, kale is considered the best vegetable for cancer prevention, ranking even ahead of broccoli.

“It used to be said that the kale only becomes sweet after a frost. But the modern varieties no longer have that many bitter compounds anyway,” says Gottfried Gerken. He is the largest grower in the Langförden region. Here, between Vechta and Oldenburg, lies the main growing area for the cult vegetable.

Practically right next door is the kale specialist ELO-Frost. Almost all well-known supermarkets and restaurants source their kale frozen from the company. Up to 70 tonnes a day are processed there.

There is more variety in Rhauderfehn. Nowhere in Lower Saxony do as many kale varieties thrive as on Reinhard Lühring’s field. His passion is above all for the old varieties and their refinement.

And at the University of Oldenburg, the Institute of Biology has its own kale research department.

The report from the “Wie geht das?” series follows kale’s journey from the field to the freezer and introduces people for whom kale means far more than a side dish to Kassler and Pinkel.”

As a native northern German, I am of course a fan of completely classic kale with Pinkel, but there are of course many other ways to prepare it. For example:

BR Fernsehen – Unser Land: Rezept Wintergemüse: Grünkohl mal anders (“Winter vegetable recipes: kale done differently”). “It doesn’t always have to be kale with Pinkel! A farm woman from Nuremberg shows us entirely different dishes”: kale quiche, winter minestrone, kale smoothie, and Asian fried noodles with kale.

05 Dec 03:31

A Distributed Meeting Primer

by rands

As a leader who primarily values team health, I place great value on the weekly 1:1 because it’s where I assess health. It’s my highest bandwidth meeting, and historically the slight to significant lag omnipresent in video conferencing impeded that bandwidth. It stilted the conversation.

The microsecond of lag was an omnipresent reminder of the distance, combined with the incredibly predictable audio-video gymnastics that accompanied the start of these meetings and the equally predictable quip, “There has got to be a better way.”

In the past three years, my perception is that video conferencing has become a solved problem. The combination of mature networking infrastructure and well-designed software (I primarily use Zoom) has mostly eliminated the lag of video conferencing and significantly decreased A/V gymnastics.

We still have work to do.

Remote

Let’s start with the word: remote. Remote team. Remote worker. The dictionary definition is “situated far from the main centers of population,” which in the context of the workplace is usually factually inaccurate. A remote team or human is simply a team or person who is not at headquarters, but it’s that first definition that most people mean when they say remote, and that’s the first problem.

Let’s start by agreeing to two ideas:

First: let’s call these humans and teams distributed teams. Distributed is a boring word, but it is in that boringness that we solve one issue. Remote implies far from the center, whereas distributed means elsewhere.

Second: Let’s agree that, no matter what we call the situation, the humans who are elsewhere are at a professional disadvantage.1 There is a communication, culture, and context tax applied to the folks who are distributed.2 Your job as a leader is to actively invest in reducing that tax.

Good? Let’s start with fixing meetings.

The Many People Meeting

The use case I’m going to talk about is a complex and wasteful one: the many people meeting. Much of the following prescriptive tactical advice applies to the 1:1 meetings, but let’s focus where there is the most pain, the most cost, and the most room for improvement.

In the many people meeting, you have two locations: Host and Distributed. Host is where the majority of the humans are located, and Distributed are the humans on the various other ends of a video conference call.

I’ve already written about the rules for the type of meeting here. I’d suggest reading that article with the following challenge: how do we make this meeting the same experience so as to create the same amount of value for the Host and the Distributed?

My advice, which is both lived and collected via Twitter, falls into three categories: Pre-Meeting, During, and Post-Meeting.

Pre-Meeting

  • Don’t chintz on audio/video hardware and networking.3
  • As the Host, schedule meetings at X:05 or X:35 and get there at X:00 to make sure all technology is set up for a distributed meeting. Not only does this make sure the meeting starts on time, but it sends an important signal. How often have you had a meeting where seven minutes in someone asks, “Where’s Andy?” Well, Andy is distributed, and no one turned on the video camera. More importantly, Andy has been sitting in his home office for the last seven minutes wondering Did they forget me?
  • Set sensible defaults in your software. Default your microphone and video to off when you enter a new meeting.
  • Check your background. Anything distracting behind you? Fix it.
  • Is the whiteboard in play? Great. Make sure it’s readable to distributed folks before the meeting.

During

  • Assign a Spotter on the Host side. This is the human responsible for paying attention to the distributed folks and looking for visual cues they are ready to speak.
  • Understand the acoustic attributes of a room. First time this particular set of humans are meeting in a distributed fashion? Do a microphone check for everyone right at the start. If there is horrible background noise on the Distributed-side, headphones are helpful.
  • Whoever is not speaking, hit mute. Microphones often capture more sound than you expect. Especially typing. Wait, who is typing? Same protocol as if everyone is in the same room. No laptops except for the note taker.
  • When focus shifts to the whiteboard, confirm that distributed folks can see the whiteboard. Another excellent job for the Spotter.

Post-Meeting

  • For a first time meeting with these humans or in this space, ask how it went for everyone. Fix things that are broken.
  • They can’t hear? Invest in fixing bad audio in conference rooms. Especially in large Host rooms, multiple microphones can capture the strangest set of sounds. At a prior gig, the board room had microphones built into the table. One exec liked to click their pen during the meeting under the table. For Distributed folks, it was a deafening CLICK CLICK CLICK that the Host room couldn’t even hear.
  • Given the likelihood that the Distributed folks missed something during the meeting – which is a thing to be fixed – the distribution of the meeting notes is a critical feedback loop for everyone in the room.
  • The room with the most people disconnects last. Respect.4

A Fact of Working Life

Why are we having this meeting? If the answer is, “We’ve always had this meeting,” then that’s a different problem and another article. The answer is likely, “This set of humans needs to be together to achieve a thing, and that thing is better achieved with these humans together.”

Humans together. Not sitting on the other side of Slack or email, but together. Doing what humans do best: gathering context, arguing, listening, debating, whiteboarding, arguing some more, and eventually arriving at the informed decision. Maybe.

Is a meeting the right solution to your particular decision? I get that it’s our default power move, but do you really need a meeting? There appears to be a new tool or service launched every week designed for Distributed teams. Is there a different approach to getting to your decision that doesn’t involve a meeting? No? Ok…

Much of the leadership work I’ve done around Distributed teams is not about resolving concerns with how the audio/visual works in a meeting; it’s about how a Distributed team feels treated by Headquarters. It’s never one thing; it’s a long list of grievances that combine into an erroneous but very real perception that a Distributed team is somehow less important.

Much of the above advice is tactical. Simple acts to facilitate better communication, but the combination of all the advice supports a broader goal: by making sure every human in the meeting has equal access to the communication and the context, we send a clear message that being Distributed doesn’t matter. There is no measurable difference if you are in the Host room or Distributed.


  1. Disclaimer. For this article, I am using a distributed team scenario where there is a headquarters: a base of operations that contains a good chunk of the humans. There is a version of distributed where everyone in the company is distributed around the world. That is super interesting, but I’ve never experienced it. There’s a chance that the advice in this article is useful in that all-distributed case too, but buyer beware. 
  2. And there are distinct advantages, too. 
  3. Think about how much you’re paying the team and then think about how much it costs you to have them inefficiently communicate with each other on crappy infrastructure. 
  4. I’m confident I missed essential tips and tricks. Comment on this post, or join the Rands Leadership Slack and join #remote-work channel. No, I didn’t name it. 
05 Dec 03:31

A Different Kind of Transparency

by Sean Packham

When we announced the Librem 5 crowdfunding campaign we promised we would publish the Librem 5 hardware schematics when we shipped. That promise is also rooted in our articles of incorporation, which commit us to releasing schematics for any hardware we author. We have shipped the first Librem 5 phones from the Birch batch to backers, and photos, videos and positive early impressions are being shared.

Librem 5 Birch Hardware Schematics

We are excited to share the hardware schematics for the Librem 5 Birch batch with you today.



You may be wondering why anyone would share their hardware schematics with the world. After all, making a groundbreaking, open and freedom-respecting phone is expensive and takes a long time. We are doing it because we believe in the freedom to choose hardware and software that treats you like a person and not a commodity to be exploited for profit.

We believe that you should have full ownership of your hardware, you shouldn’t have to essentially rent it from a company to be safe. While privacy and security are popular marketing terms these days, when many companies use those words they expect your complete and blind trust and reliance. While we believe you should trust us, we don’t require you to put blind trust in us. By publishing our schematics we give you the ability to verify that trust on your own (or with the help of someone else).

We’ve previously released hardware schematics for the Librem 5 devkits and now the Librem 5 Birch batch and will continue to share up-to-date specifications for future products and iterations. Why is this important for you even if you have no interest in looking at the specifications? Open hardware schematics allow anyone to audit, verify and contribute to more freedom respecting products. You shouldn’t have to blindly trust that any corporation has your best interests in mind.

X-Ray Images

In addition to publishing our hardware schematics, we are also sharing X-ray scans of the components to empower anyone with access to the tools to compare their hardware to the reference and ensure no nefarious components have been added. By being completely transparent, we are showing you can trust us rather than just telling you. We are also giving you the tools to verify that trust.

Discover the Librem 5

Purism believes building the Librem 5 is just one step on the road to launching a digital rights movement, where we—the people—stand up for our digital rights, where we place the control of your data and your family’s data back where it belongs: in your own hands.

Preorder now

The post A Different Kind of Transparency appeared first on Purism.

05 Dec 03:30

The Best Car Stereos With Apple CarPlay and Android Auto

by Rik Paul and Eric C. Evarts

We’re convinced that the easiest and safest way to use your phone while driving is to connect it to a stereo that has Apple CarPlay or Android Auto. And after researching more than 75 models and testing 19, we found that the Pioneer AVH-W4500NEX is the best replacement car stereo for drivers who want those features. With wireless connectivity and an intuitive interface, this Pioneer model makes it easier than other stereos to stream music, navigate to a destination, and message by voice through your phone, while keeping distractions to a minimum.

05 Dec 03:30

Help Test Firefox’s built-in HTML Sanitizer to protect against UXSS bugs

by Frederik Braun


I recently gave a talk at OWASP Global AppSec in Amsterdam and summarized the presentation in a blog post about how to achieve “critical”-rated code execution vulnerabilities in Firefox with user-interface XSS. The end of that blog post encourages the reader to participate in the bug bounty program, but did not come with proper instructions. This blog post will describe the mitigations Firefox has in place to protect against XSS bugs and how to test them.

Our about: pages are privileged pages that control the browser (e.g., about:preferences, which contains Firefox settings). To gain arbitrary code execution, a successful XSS exploit has to bypass the Content Security Policy (CSP), which we have recently added, as well as our built-in XSS sanitizer. A bypass of the sanitizer without a CSP bypass is in itself a severe-enough security bug and warrants a bounty, subject to the discretion of the Bounty Committee. See the bounty pages for more information, including how to submit findings.

How the Sanitizer works

The Sanitizer runs in the so-called “fragment parsing” step of innerHTML. In more detail, whenever someone uses innerHTML (or similar functionality that parses a string from JavaScript into HTML) the browser builds a DOM tree data structure. Before the newly parsed structure is appended to the existing DOM element our sanitizer intervenes. This step ensures that our sanitizer can not mismatch the result the actual parser would have created – because it is indeed the actual parser.

The line of code that triggers the sanitizer is in nsContentUtils::ParseFragmentHTML and nsContentUtils::ParseFragmentXML. This aforementioned link points to a specific source code revision, to make hotlinking easier. Please click the file name at the top of the page to get to the newest revision of the source code.

The sanitizer is implemented as an allow-list of elements, attributes and attribute values in nsTreeSanitizer.cpp. Please consult the allow-list before testing.

Finding a Sanitizer bypass is a hunt for Mutated XSS (mXSS) bugs in Firefox – Unless you find an element in our allow-list that has recently become capable of running script.
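
To make the allow-list idea concrete, here is a minimal TypeScript sketch of how an allow-list sanitizer works in principle. This is not nsTreeSanitizer; the element and attribute lists and the helper name are invented for illustration, and a toy walker like this glosses over exactly the parser and namespace subtleties that make real bypass hunting interesting.

// Toy allow-list sanitizer (illustrative sketch only, not Firefox's nsTreeSanitizer).
// Parse into an inert fragment, then drop any element or attribute not on the list.
const ALLOWED_ELEMENTS = new Set(["DIV", "P", "SPAN", "A", "B", "I", "IMG"]);
const ALLOWED_ATTRIBUTES = new Set(["href", "src", "alt", "title", "class"]);

function sanitizeFragment(html: string): DocumentFragment {
  const template = document.createElement("template");
  template.innerHTML = html; // template contents are inert: scripts do not run here
  const walk = (node: ParentNode): void => {
    for (const child of Array.from(node.children)) {
      if (!ALLOWED_ELEMENTS.has(child.tagName)) {
        child.remove(); // unknown element: remove it and everything inside it
        continue;
      }
      for (const attr of Array.from(child.attributes)) {
        const unsafeUrl = attr.value.trim().toLowerCase().startsWith("javascript:");
        if (!ALLOWED_ATTRIBUTES.has(attr.name) || unsafeUrl) {
          child.removeAttribute(attr.name); // e.g. strips onerror, onclick, javascript: URLs
        }
      }
      walk(child);
    }
  };
  walk(template.content);
  return template.content;
}

The real sanitizer does this work during fragment parsing itself, which is what rules out any mismatch between what it checks and what the parser actually builds.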

How and where to test

A browser is a complicated application which consists of millions of lines of code. If you want to find new security issues, you should test the latest development version. We oftentimes rewrite lots of code that isn’t related to the issue you are testing but might still have a side effect. To make sure your bug actually affects end users, test Firefox Nightly. Otherwise, the issues you find in Beta or Release might have already been fixed in Nightly.

Sanitizer runs in all privileged pages

Some of Firefox’s internal pages have more privileges than regular web pages. For example about:config allows the user to modify advanced browser settings and hence relies on those expanded privileges.

Just open a new tab and navigate to about:config. Because it has access to privileged APIs, it cannot use innerHTML (and related functionality like outerHTML and so on) without going through the sanitizer.

Using Developer Tools to emulate a vulnerability

From about:config, open the Developer Tools console (go to Tools in the menu bar, select Web Developer, then Web Console (Ctrl+Shift+K)).

To emulate an XSS vulnerability, type this into the console:

document.body.innerHTML = '<img src=x onerror=alert(1)>'

Observe how Firefox sanitizes the HTML markup by looking at the error in the console:

“Removed unsafe attribute. Element: img. Attribute: onerror.”

You may now go and try other variants of XSS against this sanitizer. Again, try finding an mXSS bug or identifying an allowed combination of element and attribute that executes script. A few illustrative starting points follow.
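
For example, these are classic payload families you might paste into the console: inline event handlers, javascript: URLs, and the kind of SVG/MathML foreign-content nesting that has historically produced mXSS issues in other sanitizers. They are illustrative probes, not known Firefox bypasses; the expected result is that the sanitizer strips the scripting parts and logs removals as above.

// Illustrative probes only; each scripting vector should be stripped.
document.body.innerHTML = '<a href="javascript:alert(1)">click me</a>'
document.body.innerHTML = '<svg><script>alert(1)</script></svg>'
document.body.innerHTML = '<math><mtext><table><mglyph><style><img src=x onerror=alert(1)>'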

Finding an actual XSS vulnerability

Right, so for now we have emulated the Cross-Site Scripting (XSS) vulnerability by typing in innerHTML ourselves in the Web Console. That’s pretty much cheating. But as I said above: What we want to find are sanitizer bypasses. This is a call to test our mitigations.

But if you still want to find real XSS bugs in Firefox, I recommend you run some sort of smart static analysis on the Firefox JavaScript code. And by smart, I probably do not mean eslint-plugin-no-unsanitized.

Summary

This blog post described the mitigations Firefox has in place to protect against XSS bugs. These bugs can lead to remote code execution outside of the sandbox. We encourage the wider community to double check our work and look for omissions. This should be particularly interesting for people with a web security background, who want to learn more about browser security. Finding severe security bugs is very rewarding and we’re looking forward to getting some feedback. If you find something, please consult the Bug Bounty pages on how to report it.

The post Help Test Firefox’s built-in HTML Sanitizer to protect against UXSS bugs appeared first on Mozilla Security Blog.

05 Dec 03:30

XP as a Long-Term Learning Strategy

by Eugene Wallingford

I recently read Anne-Laure Le Cunff's Interleaving: Rethink The Way You Learn. Le Cunff explains why interleaving -- "the process of mixing the practice of several related skills together" -- is more effective for long-term learning than blocked practice, in which students practice a single skill until they learn it and then move on to the next skill. Interleaving forces the brain to retrieve different problem-solving strategies more frequently and under different circumstances, which reinforces neural connections and improves learning.

To illustrate the distinction between interleaving and blocked practice, Le Cunff uses this image:

interleaving versus blocked practice

When I saw that diagram, I thought immediately of Extreme Programming. In particular, I thought of a diagram I once saw that distinguished XP from more traditional ways of building software in terms of how quickly it moved through the steps of the development life cycle. That image looked something like this:

XP interleaves the stages of the software development life cycle

If design is good, why not do it all the time? If testing is good, why not do it all the time, too?

I don't think that the similarity between these two images is an accident. It reflects one of XP's most important, if sometimes underappreciated, benefits: By interleaving short spurts of analysis, design, implementation, and testing, programmers strengthen their understanding of both the problem and the evolving code base. They develop stronger long-term memory associations with all phases of the project. Improved learning enables them to perform even more effectively deeper in the project, when these associations are more firmly in place.

Le Cunff offers a caveat to interleaved learning that also applies to XP: "Because the way it works benefits mostly our long-term retention, interleaving doesn't have the best immediate results." The benefits of XP, including more effective learning, accrue to teams that persist. Teams new to XP are sometimes frustrated by the small steps and seemingly narrow focus of their decisions. With a bit more experience, they become comfortable with the rhythm of development and see that their code base is more supple. They also begin to benefit from the more effective learning that interleaved practice provides.

~~~~

Image 1: This image comes from Le Cunff's article, linked above. It is by Anne-Laure Le Cunff, copyright Ness Labs 2019, and reproduced here with permission.

Image 2: I don't remember where I saw the image I hold in my memory, and a quick search through Extreme Programming Explained and Google's archives did not turn up anything like it. So I made my own. It is licensed CC BY-SA 4.0.

05 Dec 03:29

Siri is winning here

by Volker Weber


A picture says more than a thousand words.

I have one Echo Show which is still online, a couple of offline Google displays you can't talk to. Both are disconnected from Sonos and also my smart home components. No Cortana, no Bixby, no Facebook. But I use Siri all the time, on four HomePods, two iPads, two iPhones and two Apple Watches.

Read the full research >

05 Dec 03:27

Strongly Typed Events

Back in 2016, in Message Processing Styles, I was sort of gloomy and negative about the notion of automated mapping between messages on the wire and strongly-typed programming data structures. Since we just launched a Schema Registry, and it’s got my fingerprints on it, I guess I must have changed my mind.

Eventing lessons

I’ve been mixed up in EventBridge, formerly known as CloudWatch Events, since it was scratchings on a whiteboard. It has a huge number of customers, including but not limited to the hundreds of thousands that run Lambda functions, and the volume of events per second flowing through the main buses is keeping a sizeable engineering team busy. This has taught me a few things.

First of all, events are strongly subject to Hyrum's Law: With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody. Which is to say, once you’ve started shipping an event, it’s really hard, which is to say usually impossible, to change anything about it.

Second: Writing code to map back and forth between bits-on-the-wire and program data structures is a very bad use of developer time. Particularly when the messages on the wire are, as noted, very stable in practice.

Thus, the new schema registry. I’m not crazy about the name, because…

Schemas are boring

Nobody has ever read a message schema for pleasure, and very few for instruction. Among other things, most messages are in JSON, and I have repeatedly griped about the opacity and complexity of JSON Schema. So, why am I happy about the launch of a Schema Registry? Because it lets us do two useful things: Search and autocomplete.

Let’s talk about Autocomplete first. When I’m calling an API, I don’t have to remember the names of the functions or their arguments, because my IDE does that for me. As of now, this is true for events as well; the IDE knows the names and types of the fields and sub-fields. This alone makes a schema registry useful. Or, to be precise, the code bindings and serializers generated from the schema do.

The search side is pretty simple. The schema registry is just a DynamoDB thing, nothing fancy about it. But we’ve wired up an ElasticSearch index so you can type random words at it to figure out which events have a field named “drama llama” or whatever else you need to deal with today.

Inference/Discovery

This is an absolutely necessary schema-registry feature that most people will never use. It turns out that writing schemas is a difficult and not terribly pleasant activity. Such activities should be automated, and the schema registry comes with a thing that looks at message streams and infers schemas for them. They told us we couldn’t call it an “Inferrer” because everyone thinks that means Machine Learning. So it’s called “schema discovery” and it’s not rocket science at all, people have been doing schema inference for years and there’s good open-source code out there.

So if you want to write a schema and jam it into the registry, go ahead. For most people, I think it’s going to be easier to send a large-enough sample of your messages and let the code do the work. At least it’ll get the commas in the right place. It turns out that if you don’t like the auto-generated schema, you can update it by hand; like I said, it’s just a simple database with versioning semantics.
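
To make “let the code do the work” concrete, here is a minimal TypeScript sketch of single-sample schema inference. The type and helper names are hypothetical, and this is not the EventBridge discovery implementation, which also merges many samples and emits a proper schema dialect; the sketch only shows the single-sample step.

// Naive, single-sample schema inference (illustrative sketch only).
type Inferred = { type: string; fields?: Record<string, Inferred> };

function inferOne(value: unknown): Inferred {
  if (Array.isArray(value)) {
    return { type: "array" }; // element types omitted for brevity
  }
  if (value !== null && typeof value === "object") {
    const fields: Record<string, Inferred> = {};
    for (const [key, val] of Object.entries(value as Record<string, unknown>)) {
      fields[key] = inferOne(val);
    }
    return { type: "object", fields };
  }
  return { type: value === null ? "null" : typeof value }; // string, number, boolean
}

// Example: infer a schema fragment from one sample event body.
const sample = { source: "aws.ec2", detail: { state: "running", "instance-id": "i-0123456789abcdef0" } };
console.log(JSON.stringify(inferOne(sample), null, 2));

A real inferrer then folds many such results together, which is where the interesting decisions live: fields missing from some samples become optional, and conflicting primitive types become unions.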

Tricky bits

By which I mean, what could go wrong? Well, as I said above, events rarely change… except when they do. In particular, the JSON world tends to believe that you can always add a new field without breaking things. Which you can, until you’ve interposed strong types. This is a problem, but it has a good solution. When it comes to bits-on-the-wire protocols, there are essentially two philosophies: Must-Understand (receiving software should blow up if it sees anything unexpected in the data) and Must-Ignore (receiving software must tactfully ignore unexpected data in an incoming message). There are some classes of application where the content is so sensitive that Must-Understand is called for, but for the vast majority of Cloud-native apps, I’m pretty sure that Must-Ignore is a better choice.

Having said that, we probably need smooth support for both approaches. Let me make this concrete with an example. Suppose you’re a Java programmer writing a Lambda to process EC2 Instance State-change Notification events, and through the magic of the schema registry, you don’t have to parse the JSON, you just get handed an EC2InstanceStateChangeNotification object. So, what happens when EC2 decides to toss in a new field? There are three plausible options. First, throw an exception. Second, stick the extra data into some sort of Map<String, Object> structure. Third, just pretend the extra data wasn’t there. None of these are insane.
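
Here is a hedged TypeScript sketch of those three options side by side; the interface, field names, and helper are hypothetical rather than the generated EventBridge bindings, and the post’s example is Java, but the trade-off is the same in any language.

// Three policies for unexpected fields when binding raw JSON to a typed event.
interface Ec2StateChange { "instance-id": string; state: string; }
const KNOWN_FIELDS = new Set(["instance-id", "state"]);

type ExtraFieldPolicy = "must-understand" | "keep-extras" | "must-ignore";

function bind(raw: Record<string, unknown>, policy: ExtraFieldPolicy):
    Ec2StateChange & { extras?: Record<string, unknown> } {
  const extras: Record<string, unknown> = {};
  for (const key of Object.keys(raw)) {
    if (KNOWN_FIELDS.has(key)) continue;
    if (policy === "must-understand") {
      throw new Error(`Unexpected field: ${key}`); // option 1: throw an exception
    }
    if (policy === "keep-extras") {
      extras[key] = raw[key]; // option 2: park the extra data in a map
    }
    // option 3 ("must-ignore"): silently drop the field
  }
  const typed: Ec2StateChange = {
    "instance-id": String(raw["instance-id"]),
    state: String(raw["state"]),
  };
  return policy === "keep-extras" ? { ...typed, extras } : typed;
}

For most Cloud-native consumers the third behaviour is the friendliest default, which is the Must-Ignore argument above.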

There’s another world out there where the bits-on-the-wire aren’t in JSON, they’re in a “binary” format like Avro or Protocol Buffers or whatever. In that world you really need schemas because unlike JSON, you just can’t process the data without one. In the specific (popular) case of Avro-on-Kafka, there’s a whole body of practice around “schema evolution”, where you can update schemas and automatically discover whether the change is backward-compatible for existing consumers. This sounds like something we should look at across the schemas space.

Tactical futures

Speaking of those binary formats, I absolutely do not believe that the current OpenAPI schema dialect is the be-all and end-all. Here’s a secret: The registry database has a SchemaType field and I’m absolutely sure that in future, it’s going to have more than one possible value.

Another to-do is supporting code bindings in languages other than the current Java, TypeScript, and Python. At the top of my list would be Go and C#, but I know there are members of other religions out there. And for the existing languages, we should make the integrations more native. For example, the Java bindings should be in Maven.

And of course, we need support in all the platform utilities: CloudFormation, SAM, CDK, Terraform, serverless.com, and any others that snuck in while I wasn’t looking.

Big futures

So, I seem to have had a change of worldview, from “JSON blobs on the wire are OK” to “It’s good to provide data types.” Having done that, everywhere I look around cloud-native apps I see things that deal with JSON blobs on the wire. Including a whole lot of AWS services. I’m beginning to think that more or less anything that deals with messages or events should have the option of viewing them as strongly-typed objects.

Which is going to be a whole lot of work, and not happen instantly. But as it says in Chapter 64 of the Dao De Jing: 千里之行，始於足下 — “A journey of a thousand leagues begins with a single step”.

28 Nov 23:41

Mozilla and the Contract for the Web

by Mitchell Baker

Mozilla supports the Contract for the Web and the vision of the world it seeks to create. We participated in helping develop the content of the principles in the Contract. The result is language very much aligned with Mozilla, and including words that in many cases echo our Manifesto. Mozilla works to build momentum behind these ideas, as well as building products and programs that help make them real.

At the same time, we would like to see a clear method for accountability as part of the signatory process, particularly since some of the big tech platforms are high-profile signatories. Accountability would give more weight to the commitments signatories make to uphold the Contract’s principles on privacy, trust and ensuring the web supports the best in humanity.

We decided not to sign the Contract but would consider doing so if stronger accountability measures are added. In the meantime, we continue Mozilla’s work, which remains strongly aligned with the substance of the Contract.

The post Mozilla and the Contract for the Web appeared first on The Mozilla Blog.