Shared posts

31 Dec 01:14

2020 Outback Limited XT - First Impressions

by Michael Kalus

Introduction


The new 2020 Outback is my third Outback, and I have to say it is the best. I ended up with the Outback back in 2012 after deciding to get a car again and finding, to my dismay, that pretty much nobody in Canada sells wagons anymore. European manufacturers were the only exception, but they tend to fall into the luxury class and cost way more than I wanted to spend. SUVs, on the other hand, often do not offer the interior room I was looking for. So I landed on the Outback, and as you can tell, I really love my Outbacks.


Exterior Styling

For 2020 Subaru only slightly updated the look of the vehicle. Compared to my 2016 the car is a bit more “edgy”; the look comes across as slightly more aggressive. It is a style I generally like, though I would not call the Outback a pretty car. It is, after all, primarily a tool for me to get to exciting places.

The biggest change is probably in the rear, where the rear lights now have little wings and look a bit like crab claws.


Mechanicals

Drivetrain

For this year Subaru has dropped the 3.6l H6 engine that my current 2016 still sports. Alongside the carried-over 2.5l engine, Subaru now offers a 2.4l turbocharged engine, which powers all the XT models. Subaru made the move to improve fuel economy. My initial concern was that there would be turbo lag. The car is heavy, and the reason I went from the 2.5l engine in my first car to the 3.6l in the second was exactly the lack of power when you wanted to get going.


Together with the new engine the car also got an improved CVT, which now has eight virtual gears instead of the previous six. Unlike in my current car, stomping on the gas now actually makes the car go. With the old 3.6l, the CVT struggled to figure out which ratio it wanted to be in and initially jumped around a bit, which led to rough starts.

With the new engine, though, there is plenty of torque from the get-go, and the CVT is much better at translating that to the wheels. If you do want to start off hard, the turbo kicks in at around 2000rpm, and when I say “kicks in” it really does kick the car forward; it suddenly wants to go. If you are a bit more moderate with the gas pedal you get very smooth and fast acceleration. It is actually quite impressive how smoothly and quickly the car accelerates, almost without any felt effort.

Fuel Economy

Over the roughly 600km I have driven the car so far, fuel economy appears to be quite a bit improved from the previous 3.6l engine, be it in the city or on the highway. The engine can in general muster its torque at a lower RPM, and it’s rarely necessary to really “hit it” in order to make the car go. I will provide an update in the future as to the real-world experience with gas consumption, but right now the car shows a fuel consumption of 8l/100km, of which roughly 100km were done in the city and 400 on the highway.

Interior


The interior of the 2016 was already nice, but with the 2020 Subaru has gone a bit further. It’s the same trim level, so I can draw direct comparisons, and the thing that sticks out most is that there is much more leather in the car. Doors and dashboard now have leather trim in combination with soft-grip plastic.

The stitching on the leather is nice and the seats are now a bit more supportive which makes fast cornering a bit easier to handle. The centre console also comes in higher now and provides additional support.


In addition to a large storage area in the centre console that easily swallows an iPhone XS Max and provides two USB ports and an AUX in, there is a small side pocket on the passenger side that can hold a phone. There are also two additional USB outlets in the back for the rear passengers.

Little Gripes

Not all is improved, though. In the centre console above the large-screen infotainment system there is a somewhat badly fitted piece of plastic. In the highest trim model it contains a face sensor that is used to identify drivers and alert you if you’re about to doze off. Unfortunately, while the rest of the top of the dashboard is done in soft-touch plastic, this section is hard plastic, and you can clearly see seams where the part was fitted. It’s a shame that Subaru did not create a full dashboard without the plastic insert.

In addition, door storage is smaller than before: the pockets are narrower than in the old car, and the centre console lost the coin carrier and in general seems to have shrunk in size.

I am also not convinced about the piano black finish around the shifter and the leather wrap around the cup holders. I suspect the former will scratch and the latter will show spots when you spill any of your drink.


The Infotainment System

Subaru has taken a huge leap forward with the infotainment system. It’s now an 11” touch screen in the centre console. The system supports Apple’s CarPlay as well as Android Auto. Unfortunately, CarPlay only uses the upper part of the screen. This seems to be a limitation coming from Apple, and I do hope that they will eventually fix it. CarPlay also suddenly crashed on me once; I had to unplug the USB cable and plug it back in to get it back. And yes, there is no wireless CarPlay in the Subaru.

Unfortunately, Subaru has decided that touch screens are the future, and so pretty much all functionality in the car has to be controlled through the screen. You can turn the defrosters on and off via physical buttons and adjust the cabin temperature, for both driver and passenger, directly via buttons, but anything else requires you to jump through menu screens.

The good news is that the system is rather responsive; the bad news is that some functions are buried deep in the menus, and it can take a bit of searching to find the right setting. Not great.

Weird settings

There are two settings I have found so far that are weird.

The first is the setting for the automatic engine start/stop. Many people seem to hate it; I personally like the idea of not idling the car, and Subaru tries to show you how much fuel you have saved via a little info display in the dashboard that shows how long the engine has been off as well as how much fuel you have saved so far. There is a menu option that allows you to turn this feature off, but for reasons known only to Subaru the car does not remember it. The next time you start the car it’s on again, and you have to go through the menus to turn it off again. The same is true for “Automatic Vehicle Hold”. This feature has the car apply and hold the brakes for you until you either hit the gas or the brake pedal again. It’s a nice feature to have and an evolution of the previous “hill hold”.

The only option that Subaru exposes directly is X-Mode, which you can turn on right in the top info bar on the screen.

Driving the car

As mentioned above, the car’s new drivetrain is good and sprightly, and the same is true for how the car drives overall. Subaru claims a 70% improvement in chassis stiffness, and although I can’t really measure it, the car is definitely much more eager to go around corners with even less “kneeling”, and now feels even more like it is on rails.

The whole experience makes one appreciate the improved support provided by the new seats.

Safety Features

EyeSight

Unlike most other car manufacturers, Subaru does not use RADAR or LIDAR for its driving assist features. Instead, a stereo pair of cameras looks forward at the road, combined with blind spot and rear sensors.

In the past the EyeSight system “failed” for me when driving directly towards the low sun. In the driving I have done to date I am getting the impression that Subaru has improved the system: I have driven straight into the sun and the system did not turn off. I will keep testing this a bit more.

Another thing I noticed is that the new system seems to have a wider viewing angle. In the past, going up and down the Sea-to-Sky, a road with many twisty turns, the system often lost track of the car ahead, which resulted in the vehicle accelerating only to brake again once it reacquired the car in front. This is no longer the case, making for a much smoother driving experience.

Lane Keep Assist

My 2016 already had a lane keep assist feature, something I quickly turned off as it often misfired when the road markings weren’t great or snow was covering some of them up.

As with EyeSight, the system seems to have been improved, and according to Subaru it can now keep you in the lane at much lower speeds than before.

So far so good; unfortunately, the lane keep assist is utterly annoying. Not so much the alert when you leave the lane, but if you turn on the “active” part, where the car tries to keep you in the centre of the lane, you end up fighting it quite a bit. What makes it worse, the “are the hands on the wheel” detection seems to be driven by exactly that fight. I had my hands on the wheel, as I wanted to see if the car could follow the road. What I did not do was resist the car’s steering input, and immediately the car started beeping at me that I needed to have my hands on the steering wheel.

Trying to drive normally with it on feels like you are constantly fighting the car: it has a very specific idea of where it wants to be, and if you’re not there it will try to get you there. So this is off for me, which is unfortunate, as the tech definitely works and is cool.


Rearview Camera

The rearview camera continues to be a mixture of great and utterly horrible. The horrible part comes from bad-weather driving. There is no washer for the camera, and it’s mounted in a way that spray from the road covers up the lens, so you can find yourself with a totally covered lens and see absolutely nothing. It also seems to be worse with the new rear end than it was on the previous car; several times now I have had the lens completely covered up, with zero visibility.

On the plus side, where in the past you merely got a box that showed you when you were getting close to anything, the new camera now also shows you the turning radius of the car and where it’s going. This is a nice feature and makes life a bit easier (if the camera isn’t dirty / blocked).

Subaru also added automatic braking to the car when you are reversing. Again, a useful feature, especially when the camera is completely crudded up.

Initial verdict

After close to 600km I have to say I am happy with the upgrade the new Outback brings. It is still a fun car to drive, it has a nicely improved interior and continues to be as practical as always.

The move to the large screen for pretty much all of the functions is a bit of a double-edged sword, and I am not yet sure how I feel about it. I guess it could have been both better and worse. So, a neutral?

Overall though, the 2020 Subaru Outback Limited XT is a fun and useful car to drive, continuing a success story for the Outback that began 25 years ago.

I will provide an update in around six months, once I have had some more experience with the car.

31 Dec 01:13

Looking Back On 2019, the Annual ‘Tadaa’ List

by Ton Zijlstra

It’s the end of December, and we’re about to enjoy the company of dear friends to bring in the new year, as is our usual tradition. This means it is time for my annual year in review posting, the ‘Tadaa!’ list.

Nine years ago I started writing end-of-year blogposts listing the things that happened that year that gave me a feeling of accomplishment, that make me say ‘Tadaa!’, so this is the tenth edition (see the 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011 and 2010 editions). I usually move on to the next thing as soon as something is finished, and that often means I forget to celebrate or even acknowledge things during the year. Sometimes I forget things completely (a few years ago I completely forgot I had organised a national-level conference at the end of a project). My sense of awareness has improved in the past few years, especially since I have been posting week notes for the past 18 months. Still, it remains a good way to reflect on the past 12 months and list the things that gave me a sense of accomplishment. So, here’s this year’s Tadaa!-list:

  • Visiting Open Knowledge Belgium to present the open data impact measurement framework I developed as part of an assignment for the UNDP in 2018. The way it accommodates different levels of maturity on both the provision and demand sides of open data, and looks at both lead and lag indicators, allows the entire framework to be a sensor: you should see the impact of actions propagate through indicators on subsequent levels. This allows you to look backwards and forwards with the framework, providing a sense of direction and speed as well as of current status. I’m currently deploying those notions with a client organisation for more balanced and ethical measurement and data collection.
  • When my project portfolio stabilised on a few bigger things, not a range of smaller things, I felt restless at first (there should be more chaos around me!), but I slowly recognised it as an opportunity to read, learn, and do more of the stuff on my endless backlog.
  • Those few bigger things allow me to understand more deeply the client organisations I do them in, and see more of my work and input evolve into results within an organisation. The clients involved seem to be very happy with the results so far, and I actually heard and accepted their positive feedback. Normally I’d dismiss such compliments.
  • Found a more stable footing for my company and in working/balancing with the other partners. We are now in a much better place than last year: organisationally, as a team, and financially.
  • We opened up offices in Utrecht for my company, meaning we now have space available to host people and events. We used some of that new opportunity, organising a few meet-ups, an unconference and hosting the Open Nederland general assembly meeting, but it is something I’d like to do more of: setting a rhythm that makes our offices more of a hub in our network.


  • Got to be there for friends, and friends got to be there for me. Thank you.
  • Visited Peter, Catherine and Oliver on PEI for the Crafting {:} a Life unconference. The importance of spending time together in unhurried conversations can’t be overestimated.


  • Gave a keynote at the Coder Dojo NL conference. It turned out to be a more human and less abstract version of my Networked Agency keynote at SOTN in 2018, helping me to better phrase my own thoughts on how technology, agency and being human interplay.
  • Organised 2 IndieWebCamps with Frank Meeuwsen, basically bringing the IndieWeb to the Netherlands. I enjoyed working with Frank, after having been out of touch for a while. Meeting over dinner at Ewout’s early last year, blogging about independent web technology, Elmine’s birthday unconference and visiting an IndieWebCamp in Germany together all in 2018, reconnected us, leading to organising two successful events in both Utrecht and Amsterdam, putting two new cities on the IndieWeb map.


  • Kept up the blogging (for the 17th year), making my site(s) even more central to the way I process and share info by doing things like syndicating to Twitter and Mastodon from my site, and not treating Twitter as a place where I write original content.
  • Enjoying every day still how much more centrally in the country we now live, how so many more things are now within easy reach. Events I can visit in the evening in Amsterdam, Utrecht, Rotterdam or The Hague, without the need to book a hotel, because I can be back home within an hour. How it allows us to let Y experience that she’s part of a wider family, because it’s now so much easier to spend time with E’s brothers and cousins and my sisters. How comfortable our house is, and how I enjoy spending time and working in our garden.
  • Celebrated the 50th birthday of a dear friend. We all go back at least 25 years, from when we were all at university and roommates in various constellations. M said she felt privileged to have all of us around the table that night, that all of us responded to her invitation. She’s right, and all of us realised it: it is a privilege. The combination of making the effort to hang out together, and doing that consistently over many years, creates value and depth and a sense of connectedness by itself. Regardless of what happened and happens to any of us, that always stands.
  • Finally attended Techfestival, for its third edition, having had to decline the invitations to the previous two. Was there to get inspired, take the pulse of the European tech scene, and as part of the Copenhagen 150 helped create the Techpledge. Participating in that process gave me a few insights into my own role and motivations in the development and use of technology.
  • Getting into an operational rhythm with the new director and me in my role as the chairman of the Open State Foundation. Working in that role opened up my mind again to notions about openness and good governance that I had lost track of a bit while focussing on the commercial work I do in this area with my company. It rekindles the activist side of me again.
  • Working with my Open NL colleagues, yet another angle of open content, seen from the licensing perspective. Enjoyed giving a presentation on Creative Commons in Leeuwarden as part of the Open Access Week events organised by the local public and higher education libraries in that city.
  • Visited some conferences without having an active contribution to the program. It felt like a luxury to just dip in and out of sessions and talks on a whim.
  • Finding a bit more mental space and time to dive deeper into some topics. Such as ethics with regard to data collection and usage, information hygiene & security, AI and distributed technologies
  • Worked in Belgium, Denmark, Canada and Germany, which together amounts to the smallest amount of yearly travel I have done in the last decade. Travel is a habit, Bryan said to me a few years back, and it’s true: I felt the withdrawal symptoms this year. I missed travel, I need it, and as a result I especially enjoyed my trips to Denmark and Canada. In the coming year there should be an opportunity to work in SE Asia again, and I’m on the lookout for more activities across the EU member states.
  • Presented in Germany, in German, for the first time in years. Again something I’d like to do more of, although I find it difficult to create opportunities to work there. The event opened my eyes to the totally different level of digitisation in Germany. There’s a world to gain there, and there should be opportunities in contributing to that.
  • Hosted an unconference at the Saxion University of Applied Sciences in Enschede, in celebration of the 15th anniversary of the industrial design department. Its head, Karin van Beurden, asked me to do this as she had experienced our birthday unconferences and thought it a great way to celebrate something in a way that is intellectually challenging and has a bite to it. This year saw a rise in the number of unconferences I organised, facilitated or attended (7), and I find there’s an entire post-BarCamp generation completely unfamiliar with the concept. I fully intend to do more of this next year, as part of the community efforts of my company. We did one on our office rooftop this year, but I really want this to become a series.
  • Spent a lot of time (every Friday) with Y, and (on weekends) with the three of us. Y is at an age where her action radius is growing, and the type of activities we can undertake have more substance to them. I love how her observational skills and mind work, and the types of questions she is now asking.
  • Taking opportunities to visit exhibits when they arise. Allowing myself the 60 or so minutes to explore. Like when I visited the Chihuly exhibit in Groningen when I was in the city for an appointment and happened to walk past the museum.

This post is not about it, but I have tangible notions about what I want to do and focus on in the coming months, more than I had a year ago. Part of that is what I learned from the things above that gave me a sense of accomplishment. Part of that is the realisation E and I need to better stimulate and reinforce each others professional activities. That is a good thing too.

In 2019 I worked 1756 hours, which is about 36 hours per week worked. This is above my actual 4 day work week, and I still aim to reduce it further, but it’s stable compared to 2016-2018, which is a good thing. Especially considering it was well over 2400 in 2011 and higher before.

I read 48 books, less than one a week, but including a handful of non-fiction, and nicely evenly spread out over the year, not in bursts. I did not succeed in reading significantly more non-fiction, although I did buy quite a number of books, so there’s a significant stack waiting for me. Just as there is a range of fiction works still waiting for my attention. I don’t think I need to buy more books in the coming 4 or even 6 months, but I will have to learn to keep the bedside lamp on longer, as I have a surprising number of paper books waiting for me after years of e-books only.

We’ll see off the year in the company of dear friends in the Swiss mountainside, and return early 2020. Onwards!

31 Dec 01:12

Can We Build Trustable Hardware?

by bunnie

Why Open Hardware on Its Own Doesn’t Solve the Trust Problem

A few years ago, Sean ‘xobs’ Cross and I built an open-source laptop, Novena, from the circuit boards up, and shared our designs with the world. I’m a strong proponent of open hardware, because sharing knowledge is sharing power. One thing we didn’t anticipate was how much the press wanted to frame our open hardware adventure as a more trustable computer. If anything, the process of building Novena made me acutely aware of how little we could trust anything. As we vetted each part for openness and documentation, it became clear that you can’t boot any modern computer without several closed-source firmware blobs running between power-on and the first instruction of your code. Critics on the Internet suggested we should have built our own CPU and SSD if we really wanted to make something we could trust.

I chewed on that suggestion quite a bit. I used to be in the chip business, so the idea of building an open-source SoC from the ground up wasn’t so crazy. However, the more I thought about it, the more I realized that this, too, was short-sighted. In the process of making chips, I’ve also edited masks for chips; chips are surprisingly malleable, even post tape-out. I’ve also spent a decade wrangling supply chains, dealing with fakes, shoddy workmanship, and undisclosed part substitutions; there are so many opportunities and motivations to swap out “good” chips for “bad” ones. Even if a factory could push out a perfectly vetted computer, you’ve got couriers, customs officials, and warehouse workers who can tamper with the machine before it reaches the user. Finally, with today’s highly integrated e-commerce systems, injecting malicious hardware into the supply chain can be as easy as buying a product, tampering with it, packaging it into its original box, and returning it to the seller so that it can be passed on to an unsuspecting victim.

If you want to learn more about tampering with hardware, check out my presentation at Bluehat.il 2019.

Based on these experiences, I’ve concluded that open hardware is precisely as trustworthy as closed hardware. Which is to say, I have no inherent reason to trust either at all. While open hardware has the opportunity to empower users to innovate and embody a more correct and transparent design intent than closed hardware, at the end of the day any hardware of sufficient complexity is not practical to verify, whether open or closed. Even if we published the complete mask set for a modern billion-transistor CPU, this “source code” is meaningless without a practical method to verify an equivalence between the mask set and the chip in your possession down to a near-atomic level without simultaneously destroying the CPU.

So why, then, do we feel we can trust open source software more than closed source software? After all, the Linux kernel is pushing over 25 million lines of code, and its list of contributors includes corporations not typically associated with words like “privacy” or “trust”.

The key, it turns out, is that software has a mechanism for the near-perfect transfer of trust, allowing users to delegate the hard task of auditing programs to experts, and having that effort be translated to the user’s own copy of the program with mathematical precision. Thanks to this, we don’t have to worry about the “supply chain” for our programs; we don’t have to trust the cloud to trust our software.

Software developers manage source code using tools such as Git (above, cloud on left), which use Merkle trees to track changes. These hash trees link code to their development history, making it difficult to surreptitiously insert malicious code after it has been reviewed. Builds are then hashed and signed (above, key in the middle-top), and projects that support reproducible builds enable any third-party auditor to download, build, and confirm (above, green check marks) that the program a user is downloading matches the intent of the developers.

There’s a lot going on in the previous paragraph, but the key take-away is that the trust transfer mechanism in software relies on a thing called a “hash”. If you already know what a hash is, you can skip the next paragraph; otherwise read on.

A hash turns an arbitrarily large file into a much shorter set of symbols: for example, the file on the left is turned into “🐱🐭🐼🐻” (cat-mouse-panda-bear). These symbols have two important properties: even the tiniest change in the original file leads to an enormous change in the shorter set of symbols; and knowledge of the shorter set of symbols tells you virtually nothing about the original file. It’s the first property that really matters for the transfer of trust: basically, a hash is a quick and reliable way to identify small changes in large sets of data. As an example, the file on the right has one digit changed — can you find it? — but the hash has dramatically changed into “🍑🐍🍕🍪” (peach-snake-pizza-cookie).
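To make that first property concrete, here is a minimal Python sketch of my own (using SHA-256 in place of the emoji alphabet above) showing how a single-letter change in the input produces a completely different digest:

```python
import hashlib

def digest(data: bytes) -> str:
    # Return the SHA-256 hash of the data as a hex string.
    return hashlib.sha256(data).hexdigest()

original = b"The quick brown fox jumps over the lazy dog"
tampered = b"The quick brown fox jumps over the lazy cog"  # one letter changed

print(digest(original))  # d7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592
print(digest(tampered))  # e4c4d8f3bf76b692de791a173e05321150f7a345b46484fe427f6acc7ecc81be
```

The two inputs differ by one letter, yet the digests have nothing in common; that is exactly the property that makes hashes a quick, reliable tamper detector.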

Because computer source code is also just a string of 1’s and 0’s, we can use hash functions on source code, too. This allows us to quickly spot changes in code bases. When multiple developers work together, every contribution gets hashed with the previous contribution’s hashes, creating a tree of hashes. Any attempt to rewrite a contribution after it’s been committed to the tree is going to change the hash of everything from that point forward.
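As a toy illustration of that chaining (real Git hashes structured commit, tree, and blob objects, so this is only a sketch of the principle, not Git’s actual format):

```python
import hashlib

def chain(parent_hash: str, contribution: str) -> str:
    # Each new hash covers the contribution AND the parent's hash,
    # so it implicitly commits to the entire history before it.
    return hashlib.sha256((parent_hash + contribution).encode()).hexdigest()

tip = ""  # the first contribution has no parent
for change in ["initial code", "add feature", "fix bug"]:
    tip = chain(tip, change)

# Rewriting any earlier contribution changes every hash after it:
tampered_tip = ""
for change in ["initial code (backdoored)", "add feature", "fix bug"]:
    tampered_tip = chain(tampered_tip, change)

print(tip == tampered_tip)  # False: the tampering is immediately visible at the tip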

This is why we don’t have to review every one of the 25+ million lines of source inside the Linux kernel individually – we can trust a team of experts to review the code and sleep well knowing that their knowledge and expertise can be transferred into the exact copy of the program running on our very own computers, thanks to the power of hashing.

Because hashes are easy to compute, programs can be verified right before they are run. This is known as closing the “Time-of-Check vs Time-of-Use” (TOCTOU) gap. The smaller the gap between when the program is checked versus when it is run, the less opportunity there is for malicious actors to tamper with the code.
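A hedged sketch of what closing that gap looks like in code (the file name and digest below are hypothetical placeholders, not from any real distribution):

```python
import hashlib

def verify_then_use(path: str, expected_sha256: str) -> bytes:
    # Hash the program immediately before handing it to the loader,
    # keeping the time-of-check as close as possible to the time-of-use.
    data = open(path, "rb").read()
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise RuntimeError("hash mismatch: refusing to use tampered code")
    return data

# Hypothetical usage; in practice the expected digest would come from
# a signed release manifest published by the developers:
#   payload = verify_then_use("update.bin", "d7a8fbb3...")
```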

Now consider the analogous picture for open source in the context of hardware, shown above. If it looks complicated, that’s because it is: there are a lot of hands that touch your hardware before it gets to you!

Git can ensure that the original design files haven’t been tampered with, and openness can help ensure that a “best effort” has been made to build and test a device that is trustworthy. However, there are still numerous actors in the supply chain that can tamper with the hardware, and there is no “hardware hash function” that enables us to draw an equivalence between the intent of the developer, and the exact instance of hardware in any user’s possession. The best we can do to check a modern silicon chip is to destructively digest and delayer it for inspection in a SEM, or employ a building-sized microscope to perform ptychographic imaging.

It’s like the Heisenberg Uncertainty Principle, but for hardware: you can’t simultaneously be sure of a computer’s construction without disturbing its function. In other words, for hardware the time of check is decoupled from the time of use, creating opportunities for tampering by malicious actors.

Of course, we rely entirely upon hardware to faithfully compute the hashes and signatures necessary for this near-perfect transfer of trust in software. Tamper with the hardware, and all of a sudden all these clever maths are for naught: a malicious piece of hardware could forge the results of a hash computation, thus allowing bad code to appear identical to good code.

Three Principles for Building Trustable Hardware

So where does this leave us? Do we throw up our hands in despair? Is there any solution to the hardware verification problem?

I’ve pondered this problem for many years, and distilled my thoughts into three core principles:

1. Complexity is the enemy of verification. Without tools like hashes, Merkle trees and digital signatures to transfer trust between developers and users, we are left in a situation where we are reduced to relying on our own two eyes to assess the correct construction of our hardware. Using tools and apps to automate verification merely shifts the trust problem, as one can only trust the result of a verification tool if the tool itself can be verified. Thus, there is an exponential spiral in the cost and difficulty to verify a piece of hardware the further we drift from relying on our innate human senses. Ideally, the hardware is either trivially verifiable by a non-technical user, or with the technical help of a “trustable” acquaintance, e.g. someone within two degrees of separation in the social network.

2. Verify entire systems, not just components. Verifying the CPU does little good when the keyboard and display contain backdoors. Thus, our perimeter of verification must extend from the point of user interface all the way down to the silicon that carries out the secret computations. While open source secure chip efforts such as Keystone and OpenTitan are laudable and valuable elements of a trustable hardware ecosystem, they are ultimately insufficient by themselves for protecting a user’s private matters.

3. Empower end-users to verify and seal their hardware. Delegating verification and key generation to a central authority leaves users exposed to a wide range of supply chain attacks. Therefore, end users require sufficient documentation to verify that their hardware is correctly constructed. Once verified and provisioned with keys, the hardware also needs to be sealed, so that users do not need to conduct an exhaustive re-verification every time the device happens to leave their immediate person. In general, the better the seal, the longer the device may be left unattended without risk of secret material being physically extracted.

Unfortunately, the first and second principles conspire against everything we have come to expect of electronics and computers today. Since their inception, computer makers have been in an arms race to pack more features and more complexity into ever smaller packages. As a result, it is practically impossible to verify modern hardware, whether open or closed source. Instead, if trustworthiness is the top priority, one must pick a limited set of functions, and design the minimum viable verifiable product around that.

The Simplicity of Betrusted

In order to ground the conversation in something concrete, we (Sean ‘xobs’ Cross, Tom Marble, and I) have started a project called “Betrusted” that aims to translate these principles into a practically verifiable, and thus trustable, device. In line with the first principle, we simplify the device by limiting its function to secure text and voice chat, second-factor authentication, and the storage of digital currency.

This means Betrusted can’t browse the web; it has no “app store”; it won’t hail rides for you; and it can’t help you navigate a city. However, it will be able to keep your private conversations private, give you a solid second factor for authentication, and perhaps provide a safe spot to store digital currency.

In line with the second principle, we have curated a set of peripherals for Betrusted that extend the perimeter of trust to the user’s eyes and fingertips. This sets Betrusted apart from open source chip-only secure enclave projects.

Verifiable I/O

For example, the input surface for Betrusted is a physical keyboard. Physical keyboards have the benefit of being made of nothing but switches and wires, and are thus trivial to verify.

Betrusted’s keyboard is designed to be pulled out and inspected by simply holding it up to a light, and we support different languages by allowing users to change out the keyboard membrane.

The output surface for Betrusted is a black and white LCD with a high pixel density of 200ppi, approaching the performance of ePaper or print media, and is likely sufficient for most text chat, authentication, and banking applications. This display’s on-glass circuits are entirely constructed of transistors large enough to be 100% inspected using a bright light and a USB microscope. Below is an example of what one region of the display looks like through such a microscope at 50x magnification.

The meta-point about the simplicity of this display’s construction is that there are few places to hide effective back doors. This display is more trustable not just because we can observe every transistor; more importantly, we probably don’t have to, as there just aren’t enough transistors available to mount an attack.

Contrast this with more sophisticated color displays, which rely on a fleck of silicon with millions of transistors implementing a frame buffer and command interface; this controller chip is closed-source. Even if such a chip were open, verification would require a destructive method involving delayering and a SEM. Thus, the inspectability and simplicity of the LCD used in Betrusted is fairly unique in the world of displays.

Verifiable CPU

The CPU is, of course, the most problematic piece. I’ve put some thought into methods for the non-destructive inspection of chips. While it may be possible, I estimate it would cost tens of millions of dollars and a couple years to execute a proof of concept system. Unfortunately, funding such an effort would entail chasing venture capital, which would probably lead to a solution that’s closed-source. While this may be an opportunity to get rich selling services and licensing patented technology to governments and corporations, I am concerned that it may not effectively empower everyday people.

The TL;DR is that the near-term compromise solution is to use an FPGA. We rely on logic placement randomization to mitigate the threat of fixed silicon backdoors, and we rely on bitstream introspection to facilitate trust transfer from designers to user. If you don’t care about the technical details, skip to the next section.

The FPGA we plan to use for Betrusted’s CPU is the Spartan-7 FPGA from Xilinx’s “7-Series”, because its -1L model bests the Lattice ECP5 FPGA by a factor of 2-4x in power consumption. This is the difference between an “all-day” battery life for the Betrusted device, versus a “dead by noon” scenario. The downside of this approach is that the Spartan-7 FPGA is a closed source piece of silicon that currently relies on a proprietary compiler. However, there have been some compelling developments that help mitigate the threat of malicious implants or modifications within the silicon or FPGA toolchain. These are:

• The Symbiflow project is developing a F/OSS toolchain for 7-Series FPGA development, which may eventually eliminate any dependence upon opaque vendor toolchains to compile code for the devices.
• Prjxray is documenting the bitstream format for 7-Series FPGAs. The results of this work-in-progress indicate that even if we can’t understand exactly what every bit does, we can at least detect novel features being activated. That is, the activation of a previously undisclosed back door or feature of the FPGA would not go unnoticed (see the sketch after this list).
• The placement of logic within an FPGA can be trivially randomized by incorporating a random seed in the source code. This means it is not practically useful for an adversary to backdoor a few logic cells within an FPGA. A broadly effective silicon-level attack on an FPGA would lead to gross size changes in the silicon die that can be readily quantified non-destructively through X-rays. The efficacy of this mitigation is analogous to ASLR: it’s not bulletproof, but it’s cheap to execute with a significant payout in complicating potential attacks.
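To illustrate the introspection idea with a toy example (this is my own conceptual sketch, not the actual Prjxray tooling or the real 7-Series bitstream format): once the meaning of bit positions is documented, any set bit outside the documented set is a red flag worth investigating.

```python
def undocumented_bits(bitstream: bytes, documented: set) -> list:
    # Flag every set bit whose position is not in the documented set.
    # Even without knowing what each bit does, a previously unseen
    # feature being activated shows up as an unexplained set bit.
    flagged = []
    for byte_index, byte in enumerate(bitstream):
        for bit in range(8):
            pos = byte_index * 8 + bit
            if byte & (1 << bit) and pos not in documented:
                flagged.append(pos)
    return flagged

# Toy data: bits 0 and 9 are set, but only bit 0 is documented,
# so bit 9 would warrant a closer look.
print(undocumented_bits(bytes([0b00000001, 0b00000010]), {0}))  # [9]
```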

The ability to inspect compiled bitstreams in particular brings the CPU problem back to a software-like situation, where we can effectively transfer elements of trust from designers to the hardware level using mathematical tools. Thus, while detailed verification of an FPGA’s construction at the transistor-level is impractical (but still probably easier than a general-purpose CPU due to its regular structure), the combination of the FPGA’s non-determinism in logic and routing placement, new tools that will enable bitstream inspection, and the prospect of 100% F/OSS solutions to compile designs significantly raises the bar for trust transfer and verification of an FPGA-based CPU.


Above: a highlighted signal within an FPGA design tool, illustrating the notion that design intent can be correlated to hardware blocks within an FPGA.

One may argue that in fact, FPGAs may be the gold standard for verifiable and trustworthy hardware until a viable non-destructive method is developed for the verification of custom silicon. After all, even if the mask-level design for a chip is open sourced, how is one to divine that the chip in their possession faithfully implements every design feature?

The system described so far touches upon the first principle of simplicity, and the second principle of UI-to-silicon verification. It turns out that the 7-Series FPGA may also be able to meet the third principle, user-sealing of devices after inspection and acceptance.

Sealing Secrets within Betrusted

Transparency is great for verification, but users also need to be able to seal the hardware to protect their secrets. In an ideal work flow, users would:

1. Receive a Betrusted device

2. Confirm its correct construction through a combination of visual inspection and FPGA bitstream randomization and introspection, and

3. Provision their Betrusted device with secret keys and seal it.

Ideally, the keys are generated entirely within the Betrusted device itself, and once sealed it should be “difficult” for an adversary with direct physical possession of the device to extract or tamper with these keys.

We believe key generation and self-sealing should be achievable with a 7-series Xilinx device. This is made possible in part by leveraging the bitstream encryption features built into the FPGA hardware by Xilinx. At the time of writing, we are fairly close to understanding enough of the encryption formats and fuse burning mechanisms to provide a fully self-hosted, F/OSS solution for key generation and sealing.

As for how good the seal is, the answer is a bit technical. The TL;DR is that it should not be possible for someone to borrow a Betrusted device for a few hours and extract the keys, and any attempt to do so should leave the hardware permanently altered in obvious ways. The more nuanced answer is that the 7-series devices from Xilinx are quite popular and have received extensive scrutiny from the broader security community over their lifetime. The best known attacks against the 256-bit CBC AES + SHA-256 HMAC used in these devices leverage hardware side channels to leak information between AES rounds. This attack requires unfettered access to the hardware and about 24 hours to collect data from 1.6 million chosen ciphertexts. While improvement is desirable, keep in mind that a decap-and-image operation to extract keys via physical inspection using a FIB takes around the same amount of time to execute. In other words, the absolute limit on how much one can protect secrets within hardware is probably driven more by physical tamper resistance measures than by strictly cryptographic measures.

Furthermore, now that the principle of the side-channel attack has been disclosed, we can apply simple mitigations to frustrate this attack, such as gluing shut or removing the external configuration and debug interfaces necessary to present chosen ciphertexts to the FPGA. Users can also opt to use volatile SRAM-based encryption keys, which are immediately lost upon interruption of battery power, making attempts to remove the FPGA or modify the circuit board significantly riskier. This of course comes at the expense of accidental loss of the key should backup power be interrupted.

At the very least, with a 7-series device, a user will be well-aware that their device has been physically compromised, which is a good start; and in a limiting sense, all you can ever hope for from a tamper-protection standpoint.

You can learn more about the Betrusted project at our github page, https://betrusted.io. We think of Betrusted as more of a “hardware/software distro”, rather than as a product per se. We expect that it will be forked to fit the various specific needs and user scenarios of our diverse digital ecosystem. Whether or not we make completed Betrusted reference devices for sale will depend upon the feedback of the community; we’ve received widely varying opinions on the real demand for a device like this.

Trusting Betrusted vs Using Betrusted

I personally regard Betrusted as more of an evolution toward — rather than an end to — the quest for verifiable, trustworthy hardware. I’ve struggled for years to distill the reasons why openness is insufficient to solve trust problems in hardware into a succinct set of principles. I’m also sure these principles will continue to evolve as we develop a better and more sophisticated understanding of the use cases, their threat models, and the tools available to address them.

My personal motivation for Betrusted was to have private conversations with my non-technical friends. So, another huge hurdle in all of this will of course be user acceptance: would you ever care enough to take the time to verify your hardware? Verifying hardware takes effort, iPhones are just so convenient, Apple has a pretty compelling privacy pitch…and “anyways, good people like me have nothing to hide…right?” Perhaps our quixotic attempt to build a truly verifiable, trustworthy communications device may be received by everyday users as nothing more than a quirky curio.

Even so, I hope that by at least starting the conversation about the problem and spelling it out in concrete terms, we’re laying the framework for others to move the goal posts toward a safer, more private, and more trustworthy digital future.

The Betrusted team would like to extend a special thanks to the NLnet foundation for sponsoring our efforts.

31 Dec 01:12

Twitter Favorites: [skinnylatte] 12 years ago I moved to Dubai fresh out of college and attempted to make myself a meal. It was so awful I couldn’t… https://t.co/M7fTUYjJUM

Adrianna Tan @skinnylatte
12 years ago I moved to Dubai fresh out of college and attempted to make myself a meal. It was so awful I couldn’t… twitter.com/i/web/status/1…
31 Dec 01:11

Runaway

Wendy M. Grossman, net.wars, Dec 27, 2019

This is a good article despite the garish formatting (if you use Firefox you'll want to use the Just Read extension for this one). The main point of the article is that "we don't really know which patterns machine learning algorithms identify as significant." That's because the patterns aren't expressed as rules, and differences that have no meaning to us - changing a few pixels, say - may have a significant impact on the outcome. So we have to ask not only "did the model learn its task well" but also "what else did it learn?"

31 Dec 01:11

Leading Pedestrian Intervals ~ If Surrey Can Do It, Why Can’t Vancouver?

by Sandy James Planner


I have been writing about Leading Pedestrian Intervals (LPIs) and spoke on CBC Radio this month about why this innovation should be adopted everywhere.

For a nominal cost of $1,200 per intersection, crossing lights are reprogrammed to give pedestrians anywhere from a three to ten second head start to cross the street before vehicular traffic is allowed to proceed through the crosswalk. There are over 2,238 of these leading pedestrian crossing intervals installed in New York City, whose transportation policies prioritize the safety of walkers over vehicular movement. New York City saw a 56 percent decrease in pedestrian and cyclist collisions at locations where LPIs were installed. NACTO, the National Association of City Transportation Officials, estimates that LPIs can reduce pedestrian crashes by 60 percent.

Since 75 percent of Vancouver’s pedestrian crashes happen in intersections, and since most of the fatal pedestrian crashes involve seniors, it just makes sense to implement this simple change to stop injuries and to save lives.

There has not been much political will in the City of Vancouver to adopt Leading Pedestrian Intervals, and there are only a handful in the city. Kudos to the City of Surrey’s Road Safety Manager Shabnem Afzal, who has tirelessly led a Vision Zero plan (no deaths on the roads) and has been behind the installation of Leading Pedestrian Intervals at over seventy Surrey intersections.

As reported by CBC’s Jesse Johnston, a Leading Pedestrian Interval “allows pedestrians to establish their right of way in the crosswalk.”

Quoting Ms. Afzal: “It puts pedestrians into the crosswalk far enough to make them more visible to drivers. We normally implement them around T-intersections where there may be a potential for conflict between a vehicle and a pedestrian… It is a no-brainer really that we have to try and protect those most vulnerable road users. Especially given that it’s low cost and we can implement LPIs anywhere where there’s actually a signal.”

Kudos to Surrey and to Road Safety Manager Ms. Afzal for getting this done.

When can we expect the same kind of response from the City of Vancouver?

Here’s a YouTube explanation from New York City’s Department of Transportation of how the Leading Pedestrian Interval works.

Image: City of Toronto

31 Dec 01:10

Predictions Review: Trump, Zuck Crush My Optimism In 2019

by John Battelle

This past year, I predicted the fall of both Zuck and Trump, not to mention the triumph of cannabis and rational markets. But in 2019, the sociopaths won – bigly.

Damn, was I wrong.

One year ago this week, I sat down to write my annual list of ten or so predictions for the coming twelve months. And before I was even halfway through, I’d already listed and then summarily dismissed the two most significant American sociopaths of our generation.

Despite my glancing protestations (#2 and #4, below), Mark Zuckerberg and Donald Trump did not go gently into the good night of 2019. And believing they might have only proves both my naiveté and our collective challenge: If we truly want a better world, we need to reform not just the technology industry, but the steroid-fueled version of capitalism that has captured it. If I’ve learned anything from this annual process of critically reviewing my predictions, it’s this: the fusion of unrestrained capitalism with unaccountable technology has become the playground of sociopaths. And this past year, the best sociopaths won. Bigly.

And while I’m tempted to pen a rant pointing out the eerie similarities between Zuck and Trump’s character, ascendance, and current chokehold on power, I’ll leave that for another day (though as a teaser, you really should watch this clip, especially the last few seconds…). Over the past 16 years, this post has evolved into a rather light-hearted scorecard, after all. Forgive me if I’m in a grimmer mood as we get started. But I did pick a doozy for my first prediction last year:

1/ Global warming gets really, really, really real. Honestly, I don’t know how anyone could argue that 2019 wasn’t the year things got way, way too real. Given my American bias and unforgivable (if twisted) optimism, I predicted we’d have some kind of Hurricane Sandy-like event that slapped some sense into the United States. While that didn’t exactly happen (we got lucky with Dorian and others, though the Bahamas certainly didn’t), there were so many terrifying climate-related news events in 2019 that it’s impossible to imagine 2019 as anything other than a turning point in the climate change narrative. First off, we had the single largest set of mass protests on any issue, ever – and of course, Greta Thunberg as Time’s person of the year (which of course our president mercilessly and predictably mocked). We had news that the Arctic’s permafrost is melting, releasing a vicious cycle of carbon into the atmosphere. Bloomberg counted up our climate disasters in 2019, and found we had at least one every week. We had more devastating fires in California, we had a heat wave in Greenland (and Europe), we had massive waterfalls of melting ice, we had scientists freaking out that their most dire predictions are now looking too conservative. Nearly 10 million people were displaced by climate change in 2019. A huge swath of the Amazon was on fire this past year – spewing yet another continuous torrent of carbon. So yeah, the US was comparatively spared, but damn, things got really, really real this past year. I’m not happy about it, but I think I got this one at least partially right.

2/ Mark Zuckerberg resigns as Chairman of Facebook, and relinquishes his supermajority voting rights. Related, Sheryl Sandberg stays right where she is. Ok, this was one of several predictions where I was really hoping to be right, but as I copped to in the introduction, I simply should have known better. 2019 was certainly a year where plenty of tech lords were taken down a notch (see #8 below), but not at Facebook, which saw its stock rally to near record highs. Scandal, fraud, whistling past democracy’s graveyard – none of it mattered in 2019. And no way will a founding CEO get taken down a notch in that scenario, ridiculous governance structures be damned. Man, did I whiff!

3/ Despite a ton of noise and smoke from DC, no significant federal legislation is signed around how data is managed in the United States. This played out exactly as I predicted. And to be honest, I don’t expect much to come in 2020, either, despite the fulminations of legislators across both parties. Why? See #2, and for that matter, this next doozy…

4/ The Trump show gets cancelled. Nope. Just like Facebook, Trump’s stock is near an all time high – his approval ratings actually increased during the impeachment hearings. This despite the fact that 55% of the American public now wants him out of office. So yes, Trump will still be in power come New Year’s, and that means I was hopelessly wrong. I suppose I could claim some kind of win given the House did cancel his loathsome reality show, but it takes two chambers of Congress to remove a president. Just like Zuck, I’m left realizing that if I want to be more accurate in my predictions, I should stop wishing for things that make sense, but would cost kingmakers either their money or their power. Another whiff.

5/ Cannabis for the win. Yikes. What kind of idiot predicts the federal legalization of cannabis in a world controlled by Trump? This looked promising at mid year, with a number of legislators holding “historic” hearings on the subject. The issue could have gained traction from there, and we might have had a bipartisan bill by the end of the year, had Trump not needed to play to his base as impeachment seized the narrative. So alas, it was not to be. Despite huge support from the American public, Republicans in Congress managed to actually set the movement back, killing common sense legislation that would have unshackled entrepreneurs who are attempting to create a safe and stable industry (caveat: I’m invested in many of them). The fact is, this past year the black market for cannabis kicked the legal market’s ass. Another whiff, and not the kind any of us would enjoy.

6/ China implodes, the world wobbles. Ah, well, this almost happened. All year long, the headlines augured the collapse of China’s Potemkin economy, as Trump’s trade war seemed poised to tilt the globe into recession. Here are a few: Beware of Tremors in China’s Commercial Property Market; China’s Inward Tilt Could Cripple It; China’s Yuan Falls Past Key Level of 7 to the Dollar; on and on the headlines went, warning of a China implosion. But it was not to be. I was a year early and 10 trillion dollars short here. Whiff.

7/ 2019 will be a terrible year for financial markets. Lordy. Just. So. Wrong. Again, I bet against a president and a set of market makers utterly set on ensuring their own power. Damn Fool. Whifferoo.

8/ At least one major tech IPO is pulled, the rest disappoint as a class. If nothing else, here’s proof I should stick to my own lanes. Thanks WeWork, for pulling your IPO and proving that at least I’ve still got tech prediction chops. And yes, the rest of the class didn’t do so great either – Slack, Uber, Lyft have all disappointed. There were some bright spots – Pinterest, Zoom and Cloudflare among them. But it wasn’t the year the tech industry had hoped for, by a long shot.

9/ New forms of journalistic media flourish. This one was kind of a ringer – I knew we’d be launching The Recount by summer, and indeed we did. But it was also a proxy for what I hoped would be a resurgence in journalism across the board. And while I can’t prove this statistically, 2019 did feel like a year journalism got some of its mojo back. Non-profit models seemed to strengthen, subscription revenue continues to eclipse advertising at quality outlets like The New York Times, and innovative newsletters like The Hustle and The Skimm prospered. Maybe “flourish” was too optimistic (like most of my 2019 predictions), but at least this one wasn’t a total whiff.

10/ A new “social network” emerges by the end of the year. Well, umm… does TikTok count?! Not really; at least, not if you read the fine print in my prediction, where I reasoned that private social chat would be the most likely place for new entrants to emerge. And it seems Zuck agreed – announcing in March a “pivot to privacy” focused on group chat that all but destroyed any investment in the space. Later in the year, Automattic, the relatively unknown company whose WordPress platform powers nearly a third of the Internet, bought Tumblr, a once-important gateway drug that later ceded primacy to Twitter and Instagram. The combination set tech hearts aflame with speculation that a Facebook competitor was in the works. But as far as I can tell, no such plans exist. So yeah, we did see important gains for private social chat this past year, but by year’s end, the Valley’s still stuck in Facebook’s grip, and everyone’s still debating if we’ll ever emerge from it. Me, I’m not so optimistic anymore.

And that, friends, caps what is likely the worst year of predictions I’ve ever reviewed. By my count I only got three of ten defensibly correct in 2019, with a couple pushes and five miserable whiffs. Not a good scorecard going into 2020, but hey, at least I learned something. In an era dominated by Trump and Zuck, it’s best to check your optimism before wading into prognostication. But hell, I’ve still got a few days before I plan on writing my predictions for 2020. Irrational optimism is a hard habit to quit. Maybe it’ll make a comeback next year….


Previous predictions:

Predictions 2019

Predictions 2018

2018: How I Did

Predictions 2017

2017: How I Did

Predictions 2016

2016: How I Did

Predictions 2015

2015: How I Did

Predictions 2014

2014: How I Did

Predictions 2013

2013: How I Did

Predictions 2012

2012: How I Did

Predictions 2011

2011: How I Did

Predictions 2010

2010: How I Did

2009 Predictions

2009 How I Did

2008 Predictions

2008 How I Did

2007 Predictions

2007 How I Did

2006 Predictions

2006 How I Did

2005 Predictions

2005 How I Did

2004 Predictions

2004 How I Did

31 Dec 01:09

The Museum of Norm

by peter@rukavina.net (Peter Rukavina)

The week that I was home in Ontario for my father’s death in November I packed up a box of things from his office and workshop that I wanted to remember him by. That office and that workshop were perhaps the truest manifestations of who Dad was: packed full of things that might one day be useful, extremely well organized, and kind of weird. I had no more emotional moment that week than when I entered his office for the first time after he died, seeing it exactly as he left it; it was sacred space, with his atoms still swirling around. I had to leave and come back the next day.

By the end of the week I’d filled the box I’d found in the basement (Dad was nothing if not a compulsive saver of cardboard boxes–a trait I have inherited). The box arrived in the mail today, courtesy of brother Mike’s postal prowess, and before its contents got scattered to the corners of my office, I took photos of everything, and thus am able to present here a Museum of Norm.

Door Sign

For years and years and years Dad had an index card taped to his office door at the Canada Centre for Inland Waters with a pointer that he could rotate to indicate where he was when he wasn’t in his office. It bears the marks of much use, and many edits, and it’s lost its pointer, but it survived:

Location sign from Dad's office door at CCIW

Here are the places Dad could be that weren’t his office (clockwise from top-left):

  • Working at Home (689-5218) – that was our home phone number in Carlisle; I still sometimes call it even though my parents moved and their phone number changed.
  • Eng. Geol. or Srd. Lab
  • Elswhere in the Bldg.
  • Out
  • Printer or Xerox Room, R132/132A
  • Library
  • French Class/Seminar/Mtg.
  • Leave
  • Room R105
  • Computing Centre
  • Burlington, Hamilton, Mac – Mac was McMaster University
  • In

This is my most treasured artifact.

Business Cards

Dad spent his entire career working for the federal government at the Canada Centre for Inland Waters in Burlington, Ontario. Over the 30+ years he spent there as a research scientist, his practice of nearshore sedimentology was organized under a variety of sections and divisions. Here is a selection of his business cards reflecting those changes:

Geolimnology Section, Lakes Division

Dad's early business card, Geolimnology Section, Lakes Division, CCIW

Hydraulics Division

Business card number 2, Hydraulics

Aquatic Ecosystem Management Research Branch

Business card number 3

Aquatic Ecosystem Restoration Branch

Business card number 4

Aquatic Remediation Technologies, New Technologies Research Branch

Business card number 5

Have Computer Will Travel

When Dad and I ran Cellar Door Software together in the 1980s, we made up some business cards on a dot matrix printer, under the tag line Have Computer Will Travel:

Have Computer Will Travel business card

Tools

I inherited my love of having the right tool at the right time for the right job from my father. Between his workshop and his office he had the right tool for a lot of jobs, and I used the opportunity to fill gaps in my own collection.

Battery Tester

From the number of batteries of all types – AA, AAA, watch batteries, and more – that Dad had in his office, it’s obvious that he found nothing more frustrating than not having the right battery at hand when needed. This tester allowed him to gauge whether any given battery was still worth keeping around:

Battery Tester

Paint Scraper

A well-equipped workshop has many scrapers, and Dad’s certainly did. I remember this one in particular, though, both for its compact design, and for using it in a summer-long effort to scrape the paint off my dresser:

Paint Scraper

Wire Stripper

I used to marvel at Dad’s ability to use this wire stripper tool to remove just the insulation from a piece of wire, leaving the conductor itself intact. Now I can do it myself!

Wire Stripper

Sliderule

If I’m to be perfectly honest, I have no idea whatsoever what a sliderule is for, or how it works. But this beautiful, compact one in Dad’s desk drawer needed saving; its twin is in the collection of the Computer History Museum.

Sliderule

Try Square

I don’t know why, but of all Dad’s tools I have the strongest attachment to this try square:

Try Square

Dividers

Dad was very much of the “pay more for a well-made tool and keep it for life” school, and these dividers are a good example of that. I love the tool; I also love his printing on the box.

Dividers

Magnifier

I just love everything about this tiny magnifier with its sliding metal cover.

Magnifier

Pencil Sharpener

Pencil sharpeners these days are flimsy; this one, a model KS from the Boston Pencil Sharpener Company, is built like a tank. I’m going to mount it near the letterpress, and henceforth it will sharpen all my pencils.

Pencil Sharpener

Rubber Gloves

A pair of heavy duty rubber gloves that are perfect for the wet work of letterpress. I also like the idea of holding hands with Dad.

Rubber Gloves

Snips

I’ve a feeling Dad might have inherited these snips from his own father: they’re well-worn, but still very usable.

Snips

Rubber Stamps

My father and I shared a love of rubber stamps–I’ve got a whole barrel of them somewhere here in the office. These date from his days as a graduate student in Rochester, New York; the bottom one comes from C.H. Morse & Son.

Rubber Stamps

Supplies

Electrical Tape

It happens enough that it’s almost a law: when you need electrical tape you can’t find electrical tape. And so you use masking tape. Or cello tape. Or something else entirely inappropriate to the job. Now I have electrical tape.

Electrical Tape

String

Dad had an entire shoe box full of string in his basement workshop, and it was all I could do to prevent myself from taking it all. But space was a consideration, so I took only the two most interesting examples. I’ll use this for tying up metal type and for bookbinding.

String

Cords and Adapters

Are these a tool? A supply? Like me, Dad had boxes and boxes filled with power supplies, extension cords, and adapters. When will I ever need a stereo-mini-male to stereo-mini-male adapter? I don’t know. But if Dad felt he needed one at the ready, I should honour this.

Cords

Ephemera

63

When we moved from Burlington to Carlisle in 1972, the house my parents bought was at 63 Progreston Road; sometime thereafter the street got re-numbered, and what was № 63 became № 343. Dad saved the 63. I would do the same thing in his position.

63

Quik-Bands Tin

Second only to a love of tools, Dad and I also shared a love of things-that-hold-other-things (I have an entire shelf with that label, filled with envelopes and boxes and pouches). This is a lovely example of a thing-that-holds-other-things, a Quik-Bands bandage tin. It now holds the aforementioned rubber stamps.

Quik-Bands Tin

Joseph Brant Hospital Volunteer Badge

In his retirement, Dad volunteered every Friday at Joseph Brant Hospital in Burlington. His dedication to his post was such that the week he had a heart attack several years ago he phoned to apologize: “I’m afraid I won’t be able to make my shift on Friday, as I’ve had a heart attack.” This is one of his volunteer ID badges from that post.

Name Badge

CCGL Puffin Plaque

The RoxAnn surveys that Dad and his crew conducted in the Great Lakes were done from CCGL Puffin, a survey launch (CCGL stands for Canadian Coast Guard Launch).

Puffin Nameplate

While I was pretty certain the Puffin sank, as Dad’s 2008 email about the plaque was titled “The Puffin in happier days,” word is that she’s still on the water, acting as a sort of “tow truck for boats.” The Puffin’s sister, the Petrel, is still in Coast Guard service, and is their oldest launch.

Dad included a photo of the launch with that email:

Puffin

Plan Your Visit

Alas, after I surveyed the collection and took photos, I dispersed the items making up The Museum of Norm into various spots in my office, ready for deployment when needed. So there’s no way to tour the collection as a whole other than virtually. But as I test batteries, tie strings, scrape paint, and wrap things in electrical tape in the weeks, months and years to come, the museum will live through me.

What are children if not museums of our parents.

31 Dec 01:08

Arctic ice melting

by Nathan Yau

One way to gauge the amount of ice in the Arctic is to look at the average age of the ice. From the NASA Scientific Visualization Studio, the map above shows the estimated age of ice on a monthly basis, going back to 1984:

One significant change in the Arctic region in recent years has been the rapid decline in perennial sea ice. Perennial sea ice, also known as multi-year ice, is the portion of the sea ice that survives the summer melt season. Perennial ice may have a life-span of nine years or more and represents the thickest component of the sea ice; perennial ice can grow up to four meters thick. By contrast, first year ice that grows during a single winter is generally at most two meters thick.
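
If you want to play with the idea yourself, here is a minimal Python sketch of this kind of age-share chart. The numbers are fabricated to mimic the trend described above (older, multi-year ice shrinking as a share of the pack); this is not the NASA/NSIDC data.

import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1984, 2020)
rng = np.random.default_rng(0)

# Fabricated trend: the perennial (5+ year) share declines,
# first-year ice takes its place.
old = np.clip(np.linspace(0.45, 0.05, years.size) + rng.normal(0, 0.02, years.size), 0, 1)
mid = np.clip(np.linspace(0.30, 0.15, years.size) + rng.normal(0, 0.02, years.size), 0, 1)
first_year = 1.0 - old - mid

plt.stackplot(years, first_year, mid, old,
              labels=["first-year ice", "2-4 year ice", "5+ year (perennial) ice"])
plt.xlabel("Year")
plt.ylabel("Share of Arctic sea ice")
plt.legend(loc="upper right")
plt.title("Hypothetical shares of Arctic sea ice by age")
plt.show()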

Tags: Arctic, climate, ice, NASA

31 Dec 01:07

On the importance of cultivating a network of friends and colleagues OUTSIDE of academia

by Raul Pacheco-Vega

I have had a very diverse life and with that, I have developed various networks of friends who do NOT work at all in academia. These friends and colleagues have enriched my life enormously.

Rebecca, John, Ryan, Tanya and myself

John, Rebecca, Ryan, Tanya and myself when I was in graduate school. All of them worked in the tech industry while I was doing my PhD.

One of the reasons why I have never feared the tenure process outcomes is precisely the broad range of activities I have developed in my previous lives and the extensive and diverse network of networks I have. I am lucky to have friends and colleagues in many industries, and because of my interdisciplinary training and work experiences in government, industry, consulting, academia and business, I can do a lot more than be a professor.

I myself have done a number of different things. I have waited tables, served coffee, managed an office, danced and modeled professionally, and designed marketing campaigns and blogger relations programmes. I have done public relations and media training work. I have advised startups. I was my parents’ assistant when they were lawyers (my Mom decided to leave law behind, did a PhD in Spain at the Universidad Complutense de Madrid and became a professor of political science).

Having friends of very diverse backgrounds and doing different things myself (working as a chemical engineer in a plant, working in government as an advisor and within the bureaucracy, tutoring and teaching disadvantaged and marginalized adults basic literacy skills) has expanded my world beyond the narrower academic field.

I can do A LOT MORE.

I have worked as a management consultant, as a website programmer, and I have a super strong network of friends in extremely diverse fields. I love being a professor, don’t get me wrong, and I’m very good at it, but I don’t necessarily need to stay in academia. I have plenty of other options.

Having friends with diverse backgrounds and engaging in other professional activities not directly related to academia has made me a lot less worried and wary about a turbulent job market. Because I can do a lot of things beyond being a professor. That’s greatly thanks to having friends who work outside of academia.

Moral of the story: cultivate a network of people who are WITHIN academia AND a network of people who are OUTSIDE academia.

Both are very important.

31 Dec 01:07

How We Run the NetNewsWire Open Source Project

People ask me, “So — I can just show up, see something I feel like doing, do it, and then it will just show in the app?”

The answer, perhaps surprisingly, is no. Or, mostly no.

Well, kind of yes. It’s complicated. I’ll explain.

We run the project as if it were a commercial app

There’s a misconception about open source apps: people often think they’re not really intentionally run — they just kind of accrete whatever features and fixes the people who show up feel like doing.

NetNewsWire is not willy-nilly like that. We plan features and fixes the same way we would if the app cost money. We have a help book, a website, a blog, a Twitter account. We work especially hard on design. In most respects it’s indistinguishable from a commercial app.

We even set dates internally — and then blow right through them. :)

I’ve become fond of saying that NetNewsWire is a team. It is! But it’s also true that the app has my name on it.

My leadership style is to ask questions, talk things over, look for consensus, and trust people — but the last call is mine. It’s not a democracy, and it’s certainly not a thing where people just show up and ship whatever they feel like.

We definitely do not run the project as if it were a commercial app

But remember that NetNewsWire is free, has no budget (not one cent), and everyone who works on it is volunteering their time.

So there are some things we do differently from a commercial app.

Quality

A big one is app quality: we have to do way better than commercial apps.

This may seem surprising, but consider: we don’t have a support team. We help people when they have questions, but it’s vitally important that we ship the highest quality app possible so that we don’t get bogged down doing support. (The Help book is a big part of that too! But we’d do the Help book regardless, because it’s a matter of respect for people who use the app.)

You may remember older versions of NetNewsWire with fondness, and you may be missing some features the older versions had — but make no mistake: NetNewsWire 5 is a much higher-quality app than any older NetNewsWires.

And — this is smaller, but real — we publish the source code. Anyone can read it, and we don’t want to be embarrassed by it. Even better: we hope that people can learn from it! I’d bet that the majority of for-pay Mac and iOS apps couldn’t survive this kind of scrutiny. (I don’t say that to be mean. They don’t have to, so they don’t.)

This may sound paradoxical, but it’s true: because NetNewsWire is free and open source, we have to have a higher bar of quality than commercial apps have.

People show up

I said at the top that people can’t just show up and work on whatever and then we ship it.

Except, well — sometimes people actually do show up and work on what they want to work on, and then we ship it.

The difference is planning and design. We talk things over on our Slack group, in the #work channel: is that feature right for the app? Will it perform well enough? Does it depend on something else being done first? What’s the design for the feature? Does it need to go into 5.1, or 6.0, or…? Is it a Mac-only thing, or iOS-only, or is it for both? Etc.

(We also use the GitHub bug tracker and create milestones for different releases.)

Consider the situation with syncing systems. I knew we wanted to ship NetNewsWire 5.0 for Mac with Feedbin syncing. We couldn’t ship with nothing — and Ben Ubois at Feedbin has been super-helpful, and Feedbin is awesome, and so it was a no-brainer.

And then consider that, after that, Feedly syncing was by far the most common request, so it was obvious to prioritize that one next. (The iOS TestFlight build includes Feedly syncing, and the Mac app will follow suit.)

And then consider that our goal is to support all the various systems. Which one will come next, after Feedly? How do we choose? This decision is based in part on people who show up: what systems do they want to work on?

This is not how you’d do things with a commercial app, but in this context it works fine.

By which I mean there absolutely is an element of going with the flow of who shows up and what they want to do. That’s actually part of the fun.

No Revenue

People have offered money — just in general or for a specific feature. But we won’t take any money at all. Money would ruin it.

When money’s involved, it becomes an issue, and in this world it’s the issue. We have our own little utopia where we can pretend like it doesn’t exist. (“How fortunate!” you say. Yes, indeed, and we don’t forget that fact for a second.)

This means, though, that our decisions can be entirely about what’s best for the app and the people who use it — we never, ever have to think about what’s best for revenue.

This is so nice!

Transparency all the way down

No part of NetNewsWire is behind closed doors. The code is available, the bug tracker is open, and anyone can join the Slack group. You can watch us — and help us! — make decisions.

Because of this, I can’t do that thing commercial apps do where they keep quiet about stuff and then do a big surprise release with a bunch of new features. Luckily, I’ve done that enough in life — I have no interest in doing that over and over.

Instead, we’re honest and open about our goals and what we’re doing and when we’re doing it. Nothing’s hidden.

Which makes me feel sometimes like I’m doing a high-wire act and everybody’s watching. The level of difficulty is certainly higher than with the average commercial app, since we can’t hide our code or our thinking, since everyone is a volunteer, since we have to do better than for-pay apps.

But I wouldn’t have it any other way. I have always loved making apps, but making this app with this team is the most fun I’ve ever had. By far. (And I’ve worked on some pretty great teams.)

And you can join us if you like! Everyone’s nice. 👍

PS My favorite part of the page where I announced the TestFlight beta is the Credits, near the bottom. You could be in there next time.

31 Dec 01:06

Twitter Favorites: [Lidsville] There are so many tips of the hat to Dune in Rise of Skywalker, I came home and remembered this article by… https://t.co/2xa7JRevln

Lindsay Brown @Lidsville
There are so many tips of the hat to Dune in Rise of Skywalker, I came home and remembered this article by… twitter.com/i/web/status/1…
31 Dec 00:59

Switching back to Windows

by Rob Campbell
It’s been a good run, but nearing the end of 2019, I’ve decided to make the switch back to Windows after nearly two decades of being primarily a Mac user. Don’t get me wrong, since 2002 when I got my first IBM PowerPC-based Mac Pro, I’ve always had a PC running alongside it. At the […]
31 Dec 00:58

A few thoughts on Sonos intentionally bricking their devices

by Volker Weber

Sonos offers a trade-up program where you get a discount on new hardware for returning old hardware. Actually, you don't return it but you decommission the old hardware. Sonos calls it "recycle mode". It's a 21-day countdown at the end of which the Sonos player no longer works. These are the facts:

  • Sonos does not brick your device. You brick your device and in return you get a discount on new hardware.
  • You don't have to brick your device. You can in fact use it as long as you please. At some point in the future it will no longer get updates, and further into the future it becomes useless without these updates.

There is a reason behind all of this and it's technical. All Sonos players run the same software. When you update them, one player downloads the new software and then deploys it on all players in your household. Newer devices are more powerful than older devices. That is why not all of them support Apple AirPlay 2. And this is holding the platform back.
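
To make the least-capable-player point concrete, here is a tiny illustrative Python sketch. The capability table is hypothetical and has nothing to do with Sonos's real software; it only shows how, when every player must run one build, the features a household can actually use shrink to the intersection of what every player supports.

# Hypothetical capability data -- illustrative only, not Sonos's software.
DEVICE_FEATURES = {
    "ZP100 (2005)": {"basic_playback", "grouping"},
    "Play:5 gen2":  {"basic_playback", "grouping", "trueplay", "airplay2"},
    "One (2017)":   {"basic_playback", "grouping", "trueplay", "airplay2", "voice"},
}

def household_features(devices):
    """Features usable household-wide: the intersection of every player's set."""
    return set.intersection(*(DEVICE_FEATURES[d] for d in devices))

print(household_features(["Play:5 gen2", "One (2017)"]))
# -> basic_playback, grouping, trueplay, airplay2
print(household_features(["ZP100 (2005)", "Play:5 gen2", "One (2017)"]))
# -> basic_playback, grouping  (one legacy box limits the whole household)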

In its first ten years of existence Sonos did not have to worry about their legacy. Now they do and they have chosen to abandon their oldest products. In an effort to not piss off their customers, they are buying back their oldest devices.

Update: I think this is important to consider. The devices that Sonos is trying to get off the market are those that have ancient networking hardware designed to run 10 MBit SonosNet. They make a more recent Sonos setup unreliable since they can't really keep up. I have heard multiple reports of how everything got better as soon as people started taking their ZP90s and ZP100/120s out of their setup. The S5/Play:5 gen1 is the last player with that configuration. I have come to the conclusion that this has more weight than CPU/memory considerations.

My opinion is that "recycle mode" was a terrible idea. Sonos should have gone to the expense of actually collecting the old hardware and recycling it themselves.

What does that all mean for you as a customer? Read the writing on the wall. Sonos players don't last forever. They are designed to be supported for at least ten years after they were announced. Do not plan for longer usage.

Sonos players that support AirPlay 2 should have a long life ahead. If you want my advice, sell those without AirPlay 2 before Sonos offers to buy them back.

31 Dec 00:58

Imbecile Of The Decade

by noreply@blogger.com (BOB HOFFMAN)

We have double torture this year. Not only are we ending a year, we're also ending a decade (pettifogging killjoys will take great pains to instruct us that the decade doesn't technically end until next year. I don't care. I'm celebrating this year. Next year we could all be dead. In fact, at the rate we're going, we probably will be.)

The torturing part is the dumb lists. It's bad enough when we have to endure The 10 Best Everything of the Year, but this year we also have to endure The 10 Best Everything of the Decade.

For some reason that I can't fathom, the advertising and marketing industries seem to be particularly fond of these idiotic lists. I guess it has something to do with clickbait. The trade rag publishers have probably discovered that everyone wants to know who they have dubbed The Best Influencer Marketing Follower Fraud Celebrity of the Year or something.

Not being one to miss out on an opportunity to be stupid, I have a list of categories that I don't think have been covered. So here we go... The 10 Best Remaining Marketing Subjects for 10 Best Lists of the Decade.
  • The 10 Most Frightening Retweets of the Year
  • Best Use of Metaphor in an Email Subject Line of the Year
  • 10 Most Incomprehensible Unsubscribe Pages of the Year
  • Best Insincere Facebook Birthday Wish of the Year
  • The 10 Most Fascinating Articles About CMOs of the Decade
  • 10 Most Memorable Privacy Policy Updates of the Year
  • 5 Most Poetic LinkedIn Articles about Blockchain of the Decade
  • 10 Best Mutilations of the Name "Zuckerberg" of the Year
  • 10 Most-Unemployable-People-Who-Started-a-Podcast-this-Decade of the Year
  • 10 Best Years of the Decade
And, as always, the most heartfelt use of the term...Happy New Year!

31 Dec 00:58

On the importance of rest and recuperation (R&R) over the holidays

by Raul Pacheco-Vega

I won’t tell anyone what to do, but as I close out a terrible year, health-wise, I want to share a reflection regarding MY OWN EXPERIENCE with overwork. I think everyone can do whatever they prefer; I’m just using my experience to reflect on the profound inequalities and inequities of the higher education system and academia in general as it stands now (much as it has improved over the last few years).

Hotel San Trópico (Marina Vallarta, Puerto Vallarta, México)

Photo of me on holidays in Puerto Vallarta, Mexico, a couple of Decembers ago

My reflective Twitter thread started like this:

I have A LOT of experience with overwork. I was brought up as an overachiever. My life as a child was pretty regimented and (because 2 of my grandparents were in the military) quasi-militarized. I do not regret my upbringing and I’m grateful to my parents that they did this. I grew up expecting to balance and juggle a full-time school load, piano lessons, swimming lessons, and volunteer work teaching adults how to read and write in gang-riddled neighbourhoods. I swapped piano for theatre and competitive dancing (note I didn’t just dance, I COMPETED).

I switched from swimming and basketball to volleyball. I THRIVED while playing volleyball, and trained 4 hours every single day. I reached the junior national competitive team level and travelled the country and abroad to play tournaments. All of this, while balancing school.

To me, my friends and my social life were irreplaceable, so I balanced competitive volleyball, competitive dancing, volunteering, and a full-time school load (chemical engineering, which isn’t an “easy” undergraduate degree) with having a social life, friends and a close-knit family. I have plenty of experience with big workloads and the challenges of juggling activities while trying to keep a semblance of balance. I had tough and rigorous professors, and I do not regret having faced these challenges at all whatsoever.

HOWEVER…

When I entered grad school, more specifically my PhD, I felt that trying to manage the workload was like drinking from a firehose through a straw. I am 5’11” and often felt that my workload was 7 feet tall.

I frequently felt like I was drowning. LITERALLY

Note, I DO have special skills. I speed-read, I touch-type over 100 words per minute, and I have quasi-eidetic memory. To me, preparing for comprehensive exams was a total breeze, and when I defended my doctoral dissertation, I basically hit the ball out of the park.

Dr. Raul Pacheco-Vega at CIGA-UNAM (Morelia)

Me giving a talk at UNAM in 2014. This is one of the things I love doing the most.

But even with those skills, I STRUGGLED. When I transitioned to being a professor, I reflected on the fact that even with my skills and extensive experience as a systematic planner, I WAS STRUGGLING. I thought to myself: “what happens with everyone else who doesn’t have the privileges that I do? How much do THEY struggle?”

Being in a highly competitive environment drives up self-imposed excessive workloads. In graduate school I started getting tired regularly (despite playing competitive volleyball on a regular basis too), and this has followed me through my professor career.

THIS WAS NOT, AND IS NOT NORMAL.

I’m ok with being competitive, working hard, but I also like playing hard and more than anything, RESTING HARD. I regularly face this challenge of having a fulfilling academic career all the while trying to achieve some semblance of balance. I’ve written about this since 2013 (you can check my posts below).

2018 brought a really bad chronic pain episode, and 2019 started with a similar case. My first pain-free day of 2019 was February 15th. Over the second semester of 2019, I developed a terrible case of psoriasis/eczema/dermatitis combined with chronic fatigue/chronic pain. I am grateful that, for the most part, I lived in Paris with relatively low levels of pain, or pain-free (for the last few months of my visiting professorship at Sorbonne Nouvelle’s Institute D’Etudes D’Amerique Latine). Because at least, I got extended periods of time to THINK.

I know for a fact from my experience this year that chronic pain, chronic fatigue and dermatitis have all impeded my scholarly performance. What I could accomplish with my full capacities, I did from February 15, 2019 to September 15, 2019 (which is when my dyshidrotic eczema manifested itself alongside the chronic fatigue).

I understand that many of you may need SOME time to catch up with the accumulated workload you have. I had to do it too. FINE. I still suggest that you ought to take at least a few days off, in the way “off” is important to you (I can’t stop reading scholarly literature, so “time off” = “reading a nerdy book at a leisurely pace”). In closing:

As I was writing my thread I came across an important addendum: I am well aware of the fact that contingent faculty may be forced to overwork precisely because of the very nature of their labour precariousness. This is why academics’ wellness should also be higher education institutions’ responsibility. It’s a structural issue. We can’t let academic institutions off the hook: they must take responsibility for the well-being of staff, faculty and students.

31 Dec 00:57

Twitter Favorites: [Planta] Whenever I see there's a new list of Order of Canada recipients, I go through it and count how many i've interviewed.

Joseph Planta @Planta
Whenever I see there's a new list of Order of Canada recipients, I go through it and count how many i've interviewed.
31 Dec 00:56

On “impostor syndrome” and “FOBMO” (Fear Of Being Missed Out) – the “Publish A LOT” strategy

by Raul Pacheco-Vega

Even though English is my first language (contrary to what many people may think because of my name and last names) and I was trained in English-language institutions (The University of British Columbia in Vancouver, Canada, and University of Manchester, in Manchester, England), I have published A TON of Spanish-language journal articles and book chapters. Despite my interest in dialoguing with the global scholarly literature, I always felt that I would be at some point working in Mexico and that I needed to publish in Spanish in order to talk to my target audience.

My publications folder

OH, SO FOOLISH OF ME.

I thought I’d get A TON of citations by publishing in Spanish. I didn’t. I haven’t. Despite the massive volume of Spanish-language publications I have, I am not as cited in this language as I am in English.

BUT…

For a while there, I didn’t have (in my view) enough publications in this language to show the complexities, nuanced shades, importance and relevance of my work. I felt an enormous sense of FOBMO (Fear Of Being Missed Out). Perhaps it was insecurity, perhaps it was FOBMO; I can’t quite pinpoint how I felt. I tried to articulate my feelings on this Twitter thread.

Again, I used this Twitter thread to reflect on my publication strategy. I can’t say if it is good or bad, but I think that there is value in not letting your own insecurities be an obstacle to your intellectual development. Perhaps I could have used a different publication strategy; I don’t know. But I do know that I have sometimes felt FOBMO, despite the fact that I have a pretty decent publication record.

google scholar rpv

I strongly believe that I have lost my FOBMO partly because I now have a much larger, stronger and more robust publication record that shows my intellectual development. Again, this entry is NOT a “this is a strategy I recommend” suggestion but more like an “I wouldn’t recommend this strategy” blog post.

31 Dec 00:55

Twitter Favorites: [SheldonGLee1] @JuddBrackett_1 I said the exact same thing. “How Eriksson!”

Sheldon Lee @SheldonGLee1
@JuddBrackett_1 I said the exact same thing. “How Eriksson!”
31 Dec 00:50

Noticing some very odd behaviour on my WordPres...

by Ton Zijlstra

Noticing some very odd behaviour on my WordPress blog. I use the Mastodon Autopost plugin, and I use the Category Excluder plugin. I have one category that only publishes to RSS. To post something to Mastodon I must check a box in my edit screen. Yet if I publish something to the RSS only category, it also ends up on Mastodon, despite the checkbox not being selected. (I’m marking this posting both for that RSS only category and Mastodon Autopost. Would it result in 1 or 2 Mastodon updates, or none?)



This is an RSS-only posting for regular readers. Not secret, just unlisted. Comments / webmention / pingback all ok.
Read more about RSS Club
31 Dec 00:39

Leading Pedestrian Intervals~If Surrey Can Do It, Why Can’t Vancouver?

by Sandy James Planner
mkalus shared this story from Price Tags:
As much as Vision has done for cyclists, they def. have failed pedestrians and it doesn’t look like this is going to change. Oh and while I am complaining: Hey Province, how about striking “right turn on red” from the MVA?


I have been writing about Leading Pedestrian Intervals (LPIs) and spoke on CBC Radio this month about why this innovation should be adopted everywhere.

For a nominal cost of $1,200 per intersection, crossing lights are reprogrammed to give pedestrians anywhere from a three to ten second head start to cross the street before vehicular traffic is allowed to proceed through a crosswalk. There are over 2,238 of these leading pedestrian crossing intervals installed in New York City, where transportation policy prioritizes the safety of walkers over vehicular movement. New York City had a 56 percent decrease in pedestrian and cyclist collisions at locations where LPIs were installed. NACTO, the National Association of City Transportation Officials, estimates that LPIs can reduce pedestrian crashes by 60 percent.
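
Mechanically, an LPI is just a timing offset in the signal controller: the walk indication leads the parallel vehicle green by a few seconds. Here is a toy Python sketch of one such cycle, with illustrative timings only, not any city's actual controller program.

def lpi_cycle(walk_lead=5, green=30, clearance=5):
    """Yield (seconds_from_cycle_start, event) for one cycle with an LPI."""
    yield 0, "WALK on -- pedestrians enter the crosswalk first"
    yield walk_lead, "vehicle green -- walkers are already established and visible"
    yield walk_lead + green, "flashing DON'T WALK -- pedestrian clearance"
    yield walk_lead + green + clearance, "yellow, then red -- cycle ends"

# A 7-second head start, within the 3-10 second range cited above.
for t, event in lpi_cycle(walk_lead=7):
    print(f"t+{t:2d}s  {event}")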

Since 75 percent of Vancouver’s pedestrian crashes happen in intersections, and since most of the fatal pedestrian crashes involve seniors, it just makes sense to implement this simple change to stop injuries and to save lives.

There has not been much political will in the City of Vancouver to adopt Leading Pedestrian Intervals, and there are only a handful in the city. Kudos to the City of Surrey’s Road Safety Manager Shabnem Afzal, who has tirelessly led a Vision Zero Plan (no deaths on the roads) and has been behind the installation of Leading Pedestrian Intervals at over seventy Surrey intersections.

As reported by CBC’s Jesse Johnston, a Leading Pedestrian Interval “allows pedestrians to establish their right of way in the crosswalk.”

Quoting Ms. Afzal: “It puts pedestrians into the crosswalk far enough to make them more visible to drivers. We normally implement them around T-intersections where there may be a potential for conflict between a vehicle and a pedestrian…It is a no-brainer really that we have to try and protect those most vulnerable road users. Especially given that it’s low cost and we can implement LPIs anywhere where there’s actually a signal.”

Kudos to Surrey and to Road Safety Manager Ms. Afzal for getting this done.

When can we expect the same kind of response from the City of Vancouver?

Here’s a YouTube explanation from New York City’s Department of Transportation of how the Leading Pedestrian Interval works.

Image: City of Toronto

31 Dec 00:37

‘The Mandalorian’ Season 2 confirmed for fall 2020 release on Disney+

by Bradly Shankar
The Mandalorian Baby Yoda

The Mandalorian creator Jon Favreau has confirmed that the second season of the live-action Star Wars series will hit Disney+ in fall 2020.

A more specific launch window was not mentioned, but fans can at least rest assured that they’ll get new episodes before this time next year.

Favreau’s announcement comes hot on the heels of the release of the season finale, which premiered on Disney+ on December 27th. While he didn’t confirm any details about the show’s sophomore season, the season finale certainly leaves a number of threads dangling.

Since Disney+ launched in mid-November, The Mandalorian has quickly become the most notable title in the streaming service’s lineup of original content. In addition to being the first-ever live-action Star Wars series (with a Game of Thrones-level budget, no less), The Mandalorian also won over the hearts of many thanks to ‘Baby Yoda,’ a young member of the mysterious alien species to which iconic Star Wars character Yoda belongs.

Taking place five years after Return of the Jedi, the series follows a lone Mandalorian bounty hunter as he tries to protect the little green creature from a galaxy full of scum and villainy.

The Mandalorian stars Pedro Pascal (Narcos), Gina Carano (Deadpool), Nick Nolte (Warrior), Carl Weathers (the Rocky franchise), Taika Waititi (Jojo Rabbit) and Giancarlo Esposito (Breaking Bad).

The second season of The Mandalorian isn’t the only original Star Wars content coming to Disney+ in 2020. In February, the seventh and final season of The Clone Wars animated series — created by The Mandalorian director Dave Filoni — will hit the service. At a yet-to-be-revealed date in 2020, Star Wars: Jedi Temple Challenge — a children’s game show hosted by Jar Jar Binks actor Ahmed Best — will also begin streaming.

Beyond that, two undated live-action series are also in the works — one featuring Diego Luna’s Cassian Andor from Rogue One and another starring Ewan McGregor’s Obi-Wan Kenobi.

A Disney+ subscription costs $8.99 CAD/month or $89.99/year in Canada.

Image credit: Lucasfilm

The post ‘The Mandalorian’ Season 2 confirmed for fall 2020 release on Disney+ appeared first on MobileSyrup.

31 Dec 00:37

Taxi group sues Toronto for $1.7 billion due to hardship caused by Uber, case tossed by judge

by Dean Daley

A $1.7 billion proposed class-action lawsuit from three different taxi licensees against the City of Toronto was tossed out by a judge this month, according to the CBC.

The three taxi licensees, Lawrence Eisenberg, Behrouz Khamza and Sukhvir Thethi, were suing the city over the losses they’ve suffered in the wake of Uber’s arrival in Toronto.

Eisenberg told CBC that at one point his three licences, also called taxi plates, were worth $380,000 and each month they’d bring in $4,500. However, at this point, they’re only worth $10,000 and bring in $200 per month.

While Eisenberg believes it’s the city’s responsibility to protect taxi licensees from financial instability, Justice Paul Perell of the Ontario Superior Court of Justice thinks otherwise. “Neither the City of Toronto Act nor the Toronto Municipal Code require the City to protect the interests of taxicab owners,” said Justice Perell.

Eisenberg and the two other plaintiffs have 30 days to appeal the court’s decision.

Source: CBC

The post Taxi group sues Toronto for $1.7 billion due to hardship caused by Uber, case tossed by judge appeared first on MobileSyrup.

27 Dec 06:22

An Update on the First Use of the Term "Programming Language"

by Eugene Wallingford

This tweet and this blog entry on the first use of the term "programming language" evoked responses from readers with some new history and some prior occurrences.

Doug Moen pointed me to the 1956 Fortran manual from IBM, Chapter 2 of which opens with:

Any programming language must provide for expressing numerical constants and variable quantities.

I was aware of the Fortran manual, which I link to in the notes for my compiler course, and its use of the term. But I had been linking to a document dated October 1957, and the file at fortran.com is dated October 15, 1956. That beats the January 1957 Newell and Shaw paper by a few months.

As Moen said in his email, "there must be earlier references, but it's hard to find original documents that are early enough."

The oldest candidate I have seen comes from @matt_dz. His tweet links to this 1976 Stanford tech report, "The Early Development of Programming Languages", co-authored by Knuth. On Page 26, it refers to work done by Arthur W. Burks in 1950:

In 1950, Burks sketched a so-called "Intermediate Programming Language" which was to be the step one notch above the Internal Program Language.

Unfortunately, though, this report's only passage from Burks refers to the new language as "Intermediate PL", which obscures the meaning of the 'P'. Furthermore, the title of Burks's paper uses "program" in the language's name:

Arthur W. Burks, "An intermediate program language as an aid in program synthesis", Engineering Research Institute, Report for Burroughs Adding Machine Company (Ann Arbor, Mich.: Univ. of Michigan, 1951), ii+15 pp.

The use of "program language" in this title is consistent with the terminology in Burks's previous work on an "Internal Program Language", to which Knuth et al. also refer.

Following up on the Stanford tech report, Douglas Moen found the book Mathematical Methods in Program Development, edited by Manfred Broy and Birgit Schieder. It includes a paper that attempts "to identify the first 'programming language', and the first use of that term". Here's a passage from Page 219, via Google Books:

There is not yet an indication of the term 'programming languages'. But around 1950 the term 'program' comes into wide use: 'The logic of programming electronic digital computers' (A. W. Burks 1950), 'Programme organization and initial orders for the EDSAC' (D. J. Wheeler 1950), 'A program composition technique' (H. B. Curry 1950), 'La conception du programme' (Corrado Bohm 1952), and finally 'Logical or non-mathematical programmes' (C. S. Strachey 1952).

And then, on Page 224, it comments specifically on Burks's work:

A. W. Burks ('An intermediate program language as an aid in program synthesis', 1951) was among the first to use the term program(ming) language.

The parenthetical in that phrase -- "the first to use the term program(ming) language" -- leads me to wonder if Burks may use "program language" rather than "programming language" in his 1951 paper.

Is it possible that Knuth et al. retrofitted the use of "programming language" onto Burks's language? Their report documents the early development of PL ideas, not the history of the term itself. The authors may have used a term that was in common parlance by 1976 even if Burks had not. I'd really like to find an original copy of Burks's 1951 ERI report to see if he ever uses "programming language" when talking about his Intermediate PL. Maybe after the holiday break...

In any case, the use of "program language" by Burks and others circa 1950 seems to be the bridge between the use of the terms "program" and "language" independently and the use of "programming language" that soon became standard. If Burks and his group never used the new term for their Intermediate PL, it's likely that someone else did between 1951 and the release of the 1956 Fortran manual.

There is so much to learn. I'm glad that Crista Lopes tweeted her question on Sunday and that so many others have contributed to the answer!

27 Dec 06:22

English isn't generic for language, despite what NLP papers might lead you to believe

Emily M. Bender, Symposium on Data Science & Statistics, Dec 26, 2019

This is from last March, but I found it today, and the title alone is worth passing along this set of slides. "Natural language isn’t just English, and NLP work should stop pretending that it is. If you’re a consumer of NLP tech (e.g. for text as data research), demand better." See also: Wenyan, "an esoteric programming language that closely follows the grammar and tone of classical Chinese literature. Moreover, the alphabet of wenyan contains only traditional Chinese characters and 「」 quotes, so it is guaranteed to be readable by ancient Chinese people."

Web: [Direct Link] [This Post]
27 Dec 06:21

Educational visions

Rebecca Ferguson, Ann Jones, Eileen Scanlon, Ubiquity Press, Dec 26, 2019

I spent the better part of Boxing Day afternoon reading and mostly enjoying this book (186-page PDF). It is based on the work over the last 40 years of the Computers and Learning Research Group (CALRG) at the U.K.'s Open University. The point of departure is CALRG's "Beyond Prototypes," which is used to explain "why educational technology initiatives worldwide succeed and why they often fail." This then informs four major areas of inquiry: teaching and learning at scale, accessible inclusive learning, evidence-based learning, and STEM learning. Each is given a historical perspective, then in a separate chapter a look forward. In a commentary on the book Martin Weller describes it as a "good example" of an alternative to the "wilful historical amnesia in much of ed tech." Maybe so. But let's not forget that this is a book specifically about the Open University, and that while nobody doubts the OU's importance to the field, nobody would say that it alone defines its history, despite the often subtle ways the book says just that. Still. It's a good read, well worth the time.

Web: [Direct Link] [This Post]
27 Dec 06:21

A Clear Case for Resisting Student Tracking

by jennydavis

Drew Harwell (@DrewHarwell) wrote a balanced article in the Washington Post about the ways universities are using wifi, bluetooth, and mobile phones to enact systematic monitoring of student populations. The article offers multiple perspectives that variously support and critique the technologies at play and their institutional implementation. I’m here to lay out in clear terms why these systems should be categorically resisted.

The article focuses on the SpotterEDU app which advertises itself as an “automated attendance monitoring and early alerting platform.” The idea is that students download the app and then universities can easily keep track of who’s coming to class and also, identify students who may be in, or on the brink of, crisis (e.g., a student only leaves her room to eat and therefore may be experiencing mental health issues). As university faculty, I would find these data useful. They are not worth the social costs.

One social consequence of SpotterEDU and similar tracking applications is that these technologies normalize surveillance and degrade autonomy. This is especially troublesome among a population of emerging adults. For many traditionally aged students (18-24), university is a time of developmental transition—like adulting with a safety net. There is a fine line between mechanisms of support and mechanisms of control. These tracking technologies veer towards the latter, portending a very near future in which extrinsic accountability displaces intrinsic motivation and data extraction looms inevitable.

Speaking of data extraction, these tracking technologies run on data. Data is a valuable resource. Historically, valuable resources are exploited to the benefit of those in power and the detriment of those in positions of disadvantage. This pattern of reinforced and amplified inequality via data economies has already played out in public view (see: targeted political advertising, racist parole decisions, sexist hiring algorithms). One can imagine numerous ways in which student tracking will disproportionately affect disadvantaged groups. To name a few: students on financial aid may have their funding predicated on behavioral metrics such as class attendance or library time; “normal” behaviors will be defined by averages, which implicitly creates standards that reflect the demographic majority (e.g., white, upper-middle class) and flags demographic minorities as abnormal (and thus in need of deeper monitoring or intervention); students who work full-time may be penalized for attending class less regularly or studying from remote locations. The point is that data systems come from society and society is unequal. Overlaying data systems onto social systems wraps inequality in a veneer of objectivity and intensifies its effects.
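
A toy Python simulation makes the averages problem concrete. The attendance numbers below are synthetic and hypothetical, nothing from SpotterEDU or any real campus: a flag set at two standard deviations below the population mean barely touches the majority but catches a large share of a group whose routine simply differs, such as students working full-time.

import random

random.seed(1)
majority = [random.gauss(9, 1.5) for _ in range(900)]  # class sessions per week
working  = [random.gauss(5, 1.5) for _ in range(100)]  # e.g. full-time workers

population = majority + working
mean = sum(population) / len(population)
sd = (sum((x - mean) ** 2 for x in population) / len(population)) ** 0.5
threshold = mean - 2 * sd  # "abnormally" low, by the population's own average

def flag_rate(group):
    return sum(x < threshold for x in group) / len(group)

print(f"majority flagged as abnormal: {flag_rate(majority):.1%}")   # near zero
print(f"working students flagged as abnormal: {flag_rate(working):.1%}")  # large share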

Finally, tracking systems will not be constrained to students. It will almost certainly spread to faculty. Universities are under heavy pressure to demonstrate value for money. They are funded by governments, donors, and tuition-paying students and their families. It is not at all a stretch to say that faculty will be held to account for face time with students, time spent in offices, duration of classes, and engagement with the university. This kind of monitoring erodes the richness of the academic profession with profound effects on the nature of work for tenure-line faculty and the security of work for contingent lecturers (who make up an increasing majority of the academic workforce).

To end on a hopeful note, SpotterEDU and other tracking applications are embedded in spaces disposed to collective action. Students have always been leaders of social change and drivers of resistance. Faculty have an abundance of cultural capital to expend on such endeavors. These technologies affect everyone on campus. Tenure-line faculty, contingent faculty, and students each have something to lose and thus a shared interest and common struggle[1]. We are all in the mess together and together, we can resist our way out.  

Jenny Davis is on Twitter @Jenny_L_Davis



[1] I thank James Chouinard (@jamesbc81) for highlighting this point

27 Dec 06:21

RT @redalphababe: This map is a reminder as to why there was a determination to create a European Project at the centre of which was Peace…

by redalphababe
mkalus shared this story from ottocrat on Twitter.

This map is a reminder as to why there was a determination to create a European Project at the centre of which was Peace and ending conflict and bloodshed. twitter.com/franakviacorka…

Population losses during World War II pic.twitter.com/SGlPK59drg






Retweeted by ottocrat on Thursday, December 26th, 2019 5:13pm


27 Dec 06:20

2019 Year in Review: Operations

by Purism

2019 was a year of many changes for the operations team. In February we moved to a bigger location in Carlsbad to accommodate the growing volume of products and orders. All existing stock had to be transferred, which caused some hiccups during the transition phase, but fortunately recovery was quick.

We doubled our staff for product assembly and customer care. We also improved and streamlined our testing workflow. As a result we were able to reduce the time from order to shipping by 50%. We are now in a much better position to handle special shipping requests, like shipping on a custom date or time-critical shipments.

In June we started assembly of the new Librem Key “Made in USA” completely from our own facility. In September we introduced laptops with pre-installed PureBoot, i.e. each laptop receives an individualized and secured operating system installation during assembly and testing, which was further enhanced in December by an optional anti-interdiction service.

2020 will have a busy start, with bulk orders of various Librem 5 components arriving (batteries, chargers, etc.) and assembly of the larger Librem 5 batches beginning. We also need to process all the Holiday Sale orders and the first Librem Server orders.

Discover the Librem 5

Purism believes building the Librem 5 is just one step on the road to launching a digital rights movement, where we the people stand up for our digital rights, where we place the control of your data and your family’s data back where it belongs: in your own hands.

Preorder now

The post 2019 Year in Review: Operations appeared first on Purism.

27 Dec 06:20

Join us on our new journey

mkalus shared this story .

As a team of over 26 nationalities, we at Wunderlist/Microsoft To Do know what it's like to move your life from one place to another. Be it a city or an app, moving can be stressful and scary – but, more importantly, exciting and refreshing. The key to a successful move is having the support to make your transition as seamless as possible, so you can focus on enjoying all the new experiences and creating new memories (or to-dos).

It's time for a new beginning

We’ve been working tirelessly to ensure our new app, Microsoft To Do, feels like a new home for your lists. We want you to be able to start planning your day and checking off those to-dos as soon as you hit that import button. Your favorite features are all in To Do – features like list groups (folders), steps (subtasks), file attachments, and sharing and task assignments. We held ourselves to a high design standard on Wunderlist, and To Do is no different. In fact, we have even more background options in To Do, so now you can color code each list to keep work separate from home.

This brings us to our important news.

When we first announced Microsoft To Do, we also announced that Wunderlist would eventually retire. We planned this so we could concentrate on building a more integrated and secure app that helps you get stuff done in a smarter way.

It's time to let you know that on May 6th, 2020, we plan to shut down Wunderlist.

Why are we doing this now? We’ve stopped releasing new features and big updates to Wunderlist, so as the app ages it’s become more difficult to maintain. As technology continues to advance, we can’t guarantee that Wunderlist will continue to work as it should, or as we’d like it to. With all our latest updates, we’re confident that To Do is now the best alternative to Wunderlist, and we believe it’s the right time to make the next move. Now, we want to dedicate all our time to growing the cross-suite experience that transforms how you achieve your goals and dreams.

What does this mean for you?

You can keep using Wunderlist while we keep supporting it. You’ll still be able to access your data – you can choose to export it, or import it into To Do. Of course, we’d love for you to continue your journey with Microsoft To Do.

After May 6th, your to-dos will no longer sync. For a period of time, you’ll still be able to import your lists into To Do. Starting today, we will no longer accept new Wunderlist sign-ups.

We know this is a lot to take in, so have a look at our FAQs for more answers to all your important questions.

Switching to To Do is a snap

If you’ve been with Wunderlist for a while, you probably keep a lot of your life in our app. You’ve planned it all – from groceries to holidays to birthday parties or maybe even your wedding. We’re here to help you pack that life up and ship it to To Do. We’ll even unpack at the other end. So, what are you waiting for? Hit that download button.

You can import your lists from Wunderlist to To Do in three steps:

  1. Download To Do on iOS, Android, Mac, or Windows. Or, open our web app.

  2. Sign in with your Microsoft account. If you don’t have a Microsoft account, you can create one with your preferred email address (including Hotmail, Yahoo, or Gmail). Or, if you have an Xbox, Skype, or Live account, you can use that.

  3. A pop-up will, uhh, pop up in To Do, directing you to our Wunderlist importer. Don’t see it? Find a link to the importer in your settings.

If you want to export your lists from Wunderlist, you can do that too.

Settle in and start to explore

We’re very excited about what the future holds for To Do. We’ve always wanted to bring our Wunderlist vision to a larger user base so we can help people with a variety of different list needs. With this in mind, we put an emphasis on being more accessible, more secure, and more interconnected – all within a beautiful, simple, and easy-to-use app. You can set up multi-factor authentication to help make your lists secure. And if you’re already part of the Microsoft ecosystem, you’ll love how you can flag an email in Outlook and see it as a task in To Do, or how Planner tasks assigned to you show up in To Do.

When you move to To Do, you’ll notice that our new design now looks and feels similar to Wunderlist: the layout is familiar, with list groups, steps, and more. We hope you’ll feel right at home with our beautiful and simple app, and now you can also turn the lights down low. That’s right - dark mode is now available on Windows, iOS, Mac, and Android.

One new feature to get acquainted with is My Day. We dreamed of a way of helping you to really focus on what you can accomplish in a day without getting overwhelmed by due dates – and overdue dates. Perhaps your day is full of meetings and you know that only two things will end up being checked off, leaving you feeling guilty about not getting everything done. My Day helps relieve that anxiety and puts the power back in your hands. We’ll show you upcoming tasks as suggestions, so you can decide what to focus on and add it to My Day. The best bit? The next day it wipes clean so you can start the planning ritual fresh each morning.
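
As a rough data-structure sketch (a hypothetical Python model, not Microsoft's actual implementation), My Day behaves like a set of picks keyed to the current date, so yesterday's picks fall away on their own:

from datetime import date

class MyDay:
    """Keeps only tasks picked today; older picks silently fall away."""
    def __init__(self):
        self._picks = {}  # task_id -> date the task was added to My Day

    def add(self, task_id):
        self._picks[task_id] = date.today()

    def tasks(self):
        today = date.today()
        # Drop anything picked on an earlier day -- no overdue guilt.
        self._picks = {t: d for t, d in self._picks.items() if d == today}
        return sorted(self._picks)

my_day = MyDay()
my_day.add("write report")
print(my_day.tasks())  # ['write report'] today; empty again tomorrow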

Want to see everything with a due date? Head to the Planned smart list. You can set it to only show you the Today section, so you can concentrate on what’s due each day.

And don’t worry, that oh-so-satisfying “ding” is still there when you check off a task.

Don't just take it from us

Want to see what others say before you dive in? Here are just a few of the many people who have moved over and never looked back.

A big thank you

Some of you have been on this journey with us since the very beginning. You may even remember the “Wunderkit” days. Whether you joined in 2010 or just last year, we want to take a moment to say a big thank you. You, our users, mean everything to us, and we hope that you continue to share our vision and join us on this next step of our journey. You helped us make Wunderlist what it is, and we’d love for you to help us do the same with To Do. Tell us what you love and what you’d like to see added or updated. With our latest additions – printing, smart due dates, and dark mode – you know that we always take your feedback into consideration when building new features. So go ahead – download the app and tell us what you think over on Twitter, Facebook, or Instagram. You can also keep up to date with all our new features on our To Do blog.