Shared posts

05 Sep 14:50

“Fractivism: Corporate Bodies and Chemical Bonds” (Sara Ann Wylie) – my reading notes

by Raul Pacheco-Vega

I have read a ton of scholarly books, but I don’t think I’ve posted enough of my Twitter threads on which volumes I’ve found extremely interesting and helpful for my own research. This is the case with Sara Ann Wylie’s “Fractivism: Corporate Bodies and Chemical Bonds”. An amazing book, to which I hope my Twitter thread and blog post do justice.

This is my summary of Dr. Sara Ann Wylie’s book:

05 Sep 14:46

West Pacific: Where We Drink

by Gordon Price

Dalina, Main and East Georgia.

(By Calvin Taylor.  Click title for full image.)

05 Sep 14:38

How to divide a piece of paper into an odd number of sections?

by peter@rukavina.net (Peter Rukavina)

In yesterday’s edition of Monica Langwe’s bookbinding-focused email newsletter she mentioned the book The Art of the Fold:

I am glad to hold “The Art of The Fold” in my hands. The book, made together with her daughter Ulla Warchol, is tastefully created and a “must have” for all working with bookbinding in a creative way.

Thank you Hedi for sharing your knowledge. I always talk about you in my courses and I am truly happy for having had the opportunity to work with you!

Per my new habit for finding good books, I checked the website for The Bookmark and found, to my surprise and delight, that they had a copy in stock. And that they closed in 20 minutes.

So I hoofed over to the store and bought it (sorry–it was the last one; more are on order).

There is a Techniques section early in the book, and the section “Dividing into an odd number of sections” caught my eye:

Detail from The Art of the Fold

This is something I’ve struggled with a lot in my bookbinding experiments, as when stitching a binding it’s very common to need an odd number of sections along the spine of the book.

But I found the explanation, especially the visual, in The Art of the Fold confusing because it shows a piece of paper that happens to neatly fit along the hypotenuse of a 5x5 triangle, and that’s often not the case.

What I realized is that it’s best to ignore the illustration, and follow the theory; here’s my experiment:

Dividing a piece of paper into five sections

The key to figuring this out for me was that the X axis can be ignored completely; it’s only the Y axis that matters.

So the bottom-left corner goes at 0,0, and the top-right corner is set to wherever it lands on the Y axis at the number of sections you want (in this case 5). Then the Y axis markings (1, 2, 3, 4) are used to mark the sheet into that number of sections.
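
To make the geometry concrete, here is a minimal Python sketch of the intercept-theorem arithmetic behind the trick (the function name, units, and grid spacing are my own illustrative choices, not anything from the book):

    import math

    def fold_marks(edge_length_cm, n_sections, grid_unit_cm=1.0):
        # Tilt the sheet so the edge you want to divide runs from grid
        # line y=0 up to grid line y=n_sections. By the intercept theorem,
        # the grid lines y=1..n-1 then cross that edge at equal intervals,
        # whatever the edge's actual length happens to be.
        span = n_sections * grid_unit_cm
        if span > edge_length_cm:
            raise ValueError("edge too short for this grid unit")
        tilt_deg = math.degrees(math.asin(span / edge_length_cm))
        marks = [k * edge_length_cm / n_sections for k in range(1, n_sections)]
        return tilt_deg, marks

    # An A4 sheet's 21 cm edge divided into 5 sections:
    tilt, marks = fold_marks(21.0, 5)
    print(round(tilt, 1), [round(m, 1) for m in marks])
    # 13.8 [4.2, 8.4, 12.6, 16.8]

The marks land at k × L/N along the edge no matter what L is, which is exactly why the X axis can be ignored.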

My confusion or not, it’s a great hack, and worth the price of the book already.

I’m very excited about being able to dive deeper into The Art of the Fold.

05 Sep 14:34

cycle.travel has great cycle routing

by peter@rukavina.net (Peter Rukavina)

The Cotswolds-based cycle.travel site has an excellent description of itself:

Cycling is awesome. Amazingly, beautifully so. This humble machine, invented some 150 years ago, gets us to work, into town, and to see friends… faster, cheaper and plain more fun than the alternatives. It gets us to Britain’s best scenery entirely under our own steam. It gets us away from the daily grind.

Cycling gets us places. That’s why this site is called cycle.travel.

Not everyone wants to be Bradley Wiggins. There’s a lot of cycle.sport and cycle.performance on the web. We aim to be something different. For us, it’s not about the bike; it’s about the ride, and making better, more liveable cities and countryside with the bicycle as our chosen weapon.

The site also happens to have the best cycle-routing system I’ve yet come across, one that truly leverages the cycle-related data in OpenStreetMap to good effect.

Compare, for example:

Here’s the routing via the OSRM routing algorithm from my house to the Charlottetown Farmers’ Market, one of two cycle-routing options on OpenStreetMap:

Map of OSRM route to the market

This route suffers from taking one out onto University Avenue, a very cycle-hostile street at the best of times, and particularly so north of Allen Street to Belvedere.

GraphHopper, the other OpenStreetMap cycle-routing option, is a little bit better:

Map showing GraphHopper route to the market

GraphHopper’s route avoids University Avenue and routes to the Confederation Trail, but that stretch of Allen Street from Walthen Drive to the trail is fraught with cycling pitfalls.

By contrast, here is the cycle.travel route, which happens to be the exact route that Oliver and I take to the market every Saturday:

Map showing cycle.travel routing to the market

Any routing algorithm involves a struggle to weight competing factors: should the route be the fastest? the one with the gentlest hills? the safest? the most scenic?
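
For a sense of how a router might trade those factors off, here is a toy Python cost function; the factor names, weights, and numbers are entirely hypothetical, and are not how cycle.travel, OSRM, or GraphHopper actually score roads:

    def edge_cost(length_m, climb_m, traffic_stress, scenic_bonus,
                  w_time=1.0, w_hills=5.0, w_safety=3.0, w_scenery=0.5):
        # Collapse the competing factors into one number per road segment;
        # a shortest-path search (e.g. Dijkstra's) then minimizes the total.
        cost = w_time * length_m                      # distance stands in for time
        cost += w_hills * climb_m                     # penalize climbing
        cost += w_safety * traffic_stress * length_m  # penalize busy roads
        cost -= w_scenery * scenic_bonus * length_m   # reward pleasant riding
        return max(cost, 0.0)                         # keep costs non-negative

    # A short, cycle-hostile arterial vs. a longer, calmer trail:
    arterial = edge_cost(800, 2, traffic_stress=0.9, scenic_bonus=0.0)
    trail = edge_cost(1100, 2, traffic_stress=0.05, scenic_bonus=0.8)
    print(arterial > trail)  # True: with these weights, the trail wins

Shift the weights and the “best” route changes, which is why different routers can give such different answers for the same two endpoints.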

As such, finding a routing algorithm that suits your particular tastes is much like finding a coffee you like drinking. I’m glad I found mine.

05 Sep 14:34

“Doubly Engaged Ethnography: Opportunities and Challenges When Working With Vulnerable Communities” – my IJQM article with @KateParizeau

by Raul Pacheco-Vega

A few months back, I wrote an explainer Twitter thread on “Doubly Engaged Ethnography: Opportunities and Challenges When Working With Vulnerable Communities”, the article Dr. Kate Parizeau (University of Guelph) and I published in the International Journal of Qualitative Methods (IJQM). I had explained that this article emerged as a result of a conversation Kate and I had back in 2013 on the ethics of doing ethnographic fieldwork in vulnerable communities. We both studied informal waste pickers in Argentina and Mexico, and this article is the first of several collaborations we have in the works.

It’s free to download and read from the IJQM website and definitely one of my favourite pieces.

05 Sep 14:34

Developing a Small-Scale Graph Database: A Ten Step Learning Guide for Beginners

Fred Cheyunski, Journal of Interactive Technology and Pedagogy, Sept 04, 2019

This post introduces readers to graph databases by having them create and work with a graph database of their own. The database used, Neo4j, is a widely used and popular application. The example is built around the concepts of books and reviewers, and the logic of the graph is developed step by step to show that the database does more than just store information about books; it also helps the user identify relations between them. This is a good introduction if you want to explore the concepts further, but in addition to reading, doing the hands-on activity is recommended.
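
For a flavour of what such a guide builds, here is a minimal sketch using the official neo4j Python driver; the node labels, sample data, and connection details are my own illustrative assumptions, not necessarily the guide's exact steps:

    from neo4j import GraphDatabase  # pip install neo4j

    # Placeholder connection details; adjust for your own instance.
    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))

    with driver.session() as session:
        # Store a book, a reviewer, and the relationship between them.
        session.run(
            "MERGE (b:Book {title: $title}) "
            "MERGE (r:Reviewer {name: $name}) "
            "MERGE (r)-[:REVIEWED {rating: $rating}]->(b)",
            title="Dune", name="Alice", rating=5)

        # The graph payoff: traverse relationships rather than join rows.
        # "Which other books did this book's reviewers also review?"
        result = session.run(
            "MATCH (:Book {title: $title})<-[:REVIEWED]-(:Reviewer)"
            "-[:REVIEWED]->(other:Book) "
            "RETURN DISTINCT other.title AS title",
            title="Dune")
        for record in result:
            print(record["title"])

    driver.close()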

05 Sep 14:32

Apple Hits Restart on Game Controller Support

by John Voorhees

It’s hard to believe it’s been nearly six years since Apple added game controller support to iOS. The big news at WWDC in 2013 was the iOS 7 redesign, but for game developers, it was rivaled by the announcement that third-party Made For iPhone (MFi) controllers were coming.

The game press and developers understood the potential of controller support immediately. Even though it wasn’t announced there, Chris Plante of Polygon declared controller support the biggest story of E3, the game industry trade show that was happening at the same time as WWDC. Plante imagined that:

If Apple finds a way to standardize traditional controls, every iOS device will become a transportable console. In a year, both iPhones and iPads will approach the processing power of the current-generation devices. Companies will have the ability to port controller-based games for the mobile devices in millions of pockets — an install-base far greater than they’ve ever had before.

Game industry veteran Gabe Newell, the co-founder of Valve, saw Apple’s entry as a big risk to companies making PC and console games:

The threat right now is that Apple has gained a huge amount of market share, and has a relatively obvious pathway towards entering the living room with their platform…I think Apple rolls the console guys really easily.

I was right there with them. iOS devices couldn’t match the power of a traditional console in 2013, but you could see that they were on a trajectory to get there. With the addition of controller support, Apple felt poised to make a meaningful run at incumbents like Sony and Microsoft.

It didn’t work out that way though. iOS’ controller support was rushed to market. Early controllers were priced at around $100, in part because of the requirements of the MFi certification, and they couldn’t match the quality of controllers from Sony and Microsoft.

As anticipated, controller support was extended to the Apple TV when its App Store launched in 2015. Initially, it looked as though Apple would allow game developers to require a controller. In the end, though, the company went an entirely different direction by requiring that games support the Apple TV Remote, a decision that complicated development and dumbed down controller integration to match the remote’s limited input methods. Apple changed course eventually, and now lets developers require controllers, but by the time of that change the damage had been done. Many developers had already lost interest in controller support. It didn’t help either that for a very long time, the App Store didn’t indicate which games were compatible with MFi controllers, leaving the void to be filled by third-party sites.

Last year, when I looked back at the history of games on the App Store for its tenth anniversary, I came away pessimistic about the future of games on Apple’s platforms. After a decade, I felt like we were still asking the same question that Federico posed in 2013:

Will Apple ever develop a culture and appreciation for gaming as a medium, not just an App Store category?

Sadly, Federico’s question remains as relevant today as it was six years ago. Still, I’m cautiously optimistic based on what’s happened in the past year. Part of that is the App Store editorial team’s excellent track record of championing high-quality games in the stories published on the App Store. Another factor is Apple Arcade, the game subscription service we still don’t know a lot about, but which appears designed to showcase high-quality, artistically important games.

The latest cause for optimism is Apple’s announcement at WWDC this past June that iOS, iPadOS, tvOS, and macOS would all support the Sony DualShock 4 and Bluetooth-based Xbox controllers when Apple’s OSes are updated this fall. The reaction from developers and other observers was a combination of surprise and excitement that was uncannily similar to the MFi announcement in 2013. Yet the news raises the question: ‘How is this time any different?’ The answer lies in how the new controllers work and the role they will play in Arcade.

There’s an elegance to the simplicity in what Apple has done to support Sony and Microsoft’s controllers. Modern controllers vary by manufacturer, but Sony’s and Microsoft’s share roughly the same button layout and functionality as MFi controllers. By abstracting away the differences between each controller in its Game Controller framework, Apple has designed a system that makes each controller equally easy to use.

Connecting

Only one controller can be connected at a time.

Like existing MFi controllers, Sony’s DualShock 4 controller and Microsoft’s Xbox One S and upcoming Elite 2 controllers work over a wireless Bluetooth connection. Each controller has a slightly different process for initiating Bluetooth pairing, but from the standpoint of Apple’s hardware, it’s no different than pairing a Bluetooth keyboard or mouse. For Apple TV owners who may not have connected a Bluetooth device to that system before, Apple even has instructions for the DualShock 4 and Xbox One S controllers as well as the Nimbus and Stratus MFi controllers made by SteelSeries.

tvOS 13 will provide users with help connecting their controllers.

To connect the Xbox One S controller for the first time, you press the connect button on the front edge of the controller and wait for the Xbox logo to blink. On the DualShock controller, hold the PS and Share buttons down at the same time until the light bar on the front of the controller blinks. When the controller shows up in the devices section of whichever Apple OS you’re connecting to, tap or click on it to start the pairing process. The next time you want to connect a controller, it will already be in your list of Bluetooth devices, so all you need to do is press the PS or Xbox buttons on your controller to reconnect.

Microsoft’s Xbox One S controller.

With Xbox controllers, it’s important to keep in mind that the only models that work with Apple devices are model number 1708, which is the one that comes with the Xbox One S, and the Xbox Elite 2 controller that was announced at E3 this year. Previous Xbox controller models don’t use Bluetooth and won’t work.

Because only one controller can be connected at a time, you cannot connect a second controller without disconnecting the first. Also worth noting: with iOS and iPadOS 13.1, the Batteries Today widget will track your controller’s remaining battery power, reporting an exact percentage that not even the PS4 or Xbox consoles offer.

iOS 13.1’s Batteries widget will report the battery remaining for your controllers.

Interestingly, the DualShock 4 controller also works with the iPad Pro tethered by a USB-C to Micro USB cable, but unfortunately, the sound doesn’t play. I tried Bluetooth headphones too, and still, there was no sound until the USB cable was unplugged. When I tried the same thing with my Xbox One S controller, it didn’t work at all. I don’t recommend playing with a tethered DualShock 4 controller, but the fact that it works, even a little bit, tells me that more can be done by Apple to expand the compatibility of these controllers with its hardware. It also highlights the versatility of USB-C, which can’t come fast enough to the rest of Apple’s iPads and the iPhone.

Sony’s DualShock 4 controller.

A humorous side effect of the interchangeability of Xbox One S and DualShock 4 controllers is that it gives players an indirect way to play their favorite PS4 games with an Xbox controller. PS4 Remote Play lets users stream games from a PS4 to an iOS device, and with controller support coming in the fall, you’ll be able to play those games with Sony’s or Microsoft’s controller. Of course, it’s indirect because you need to stream your PS4 games to an iOS device, but it serves to emphasize further the degree to which Apple’s Game Controller framework renders the hardware differences between controllers meaningless.

The Controls

Although the layout of controls on the DualShock 4 and Xbox One S controllers and their button labels are different, the experience of using them with Apple devices is remarkably similar. Apple has made every button, thumbstick, and trigger available for developers to use, except the DualShock controller’s PS and the Xbox One S controller’s Xbox button. Apple has accomplished this by abstracting away the differences between each controller in its Game Controller framework.

Developers can detect connected controllers and display the appropriate button labels or use their positional equivalents.

Controls are mapped to their positional equivalents so they correspond to the control layout of the MFi controller specification. For example, the DualShock’s ‘X’ and circle buttons map to the A and B buttons of an MFi controller. It’s a simple but effective scheme that means any game that already uses the Game Controller framework to support MFi controllers will automatically work with the DualShock 4 and Xbox One S controllers when Apple’s updated OSes are released. I’ve tried several games with MFi controller support and am happy to report that in every case the games’ controls worked as I expected regardless of which controller I was using.

Apple has APIs for developers to detect if a controller is connected and display the correct button labels for things like in-game tutorials and hints. Alternatively, more generic graphics that correspond to the positional mapping of the controls can be used. Dead Cells is a good example of a recent game that takes the positional approach. It may be a while before games update their controller code, but even with something like Fez, which displays the MFi-equivalent button labels, it’s not that hard to figure out which buttons to press.
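
As a toy illustration of positional mapping (plain Python here, not Apple’s actual Game Controller framework, which developers use from Swift or Objective-C): a game keys its hints off positional names and looks up the label to display per controller:

    # Toy positional button mapping. A game refers to buttons by their
    # MFi-layout position; a per-controller table supplies the label
    # printed on the physical button, for use in on-screen hints.
    BUTTON_LABELS = {
        "mfi":        {"A": "A",     "B": "B",      "X": "X",      "Y": "Y"},
        "dualshock4": {"A": "Cross", "B": "Circle", "X": "Square", "Y": "Triangle"},
        "xbox_one_s": {"A": "A",     "B": "B",      "X": "X",      "Y": "Y"},
    }

    def jump_hint(controller):
        # Fall back to generic positional labels for unknown controllers.
        labels = BUTTON_LABELS.get(controller, BUTTON_LABELS["mfi"])
        return "Press {} to jump".format(labels["A"])

    print(jump_hint("dualshock4"))  # Press Cross to jump
    print(jump_hint("xbox_one_s"))  # Press A to jump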

Dead Cells is a good example of a game that uses generic positional mapping of controls for in-game hints.

In contrast, Fez uses the MFi controller spec’s labels, which don’t correspond to the DualShock 4 controller.

The Importance of Sony and Microsoft Controllers

From the perspective of users, the differences between MFi controllers and Sony and Microsoft’s offerings are minimal. However, that doesn’t necessarily mean the new integrations will suffer the same lackluster adoption as MFi controllers have. Although I remain skeptical, a couple of reasons lead me to believe this time things could be different.

The biggest reason is simple: millions of people already own Sony and Microsoft controllers. On top of that, Sony and Microsoft make well-designed, high-quality controllers. Gamers have their preferences between the two, but the build quality of each is undeniable and surpasses existing MFi controllers.

Sony has sold over 100 million PS4s and although Microsoft hasn’t reported Xbox One sales numbers since 2015, the conventional wisdom is that tens of millions have been sold. That’s a big existing install base of controllers.

By adding support for Sony and Microsoft controllers, Apple is increasing the number of potential users by an order of magnitude, which, in turn, has the potential to attract more developers too.

The SteelSeries Nimbus controller.

In contrast, the MFi controller market today is downright anemic. When they first debuted, the trouble with MFi controllers was that they were an additional purchase on top of an iOS device or Apple TV that cost as much or more than a console controller, but without the same build quality. Add to that the fact that early developer support was spotty, and purchasing another controller to use with Apple devices wasn’t very compelling.

One of the best of the lot is the SteelSeries Nimbus controller, which debuted in 2015. It’s a good controller, though it’s a little chunky for my tastes and not as solidly built as Microsoft and Sony’s offerings. The Nimbus connects via Bluetooth, has a Lightning connector for charging, and a $60 list price. That’s a little less than the list prices of Sony and Microsoft’s controllers, but those controllers can often be found on sale for less than the Nimbus. Also, four years after its release, SteelSeries still hasn’t released a follow-up to the Nimbus, even though it has released newer options for Android devices.

I’ve had a SteelSeries Nimbus controller for four years and have hardly ever used it, and when I do, it usually needs charging. It’s not a terrible controller, but it’s the kind of accessory that winds up in the bottom of a drawer and is hard to find. In contrast, we’ve had a PS4 in our home just about as long, and there’s always a controller that’s charged up and within easy reach because the PS4 gets used regularly. I suspect that is true for many people and that the familiarity and ability to use a PS4 or Xbox controller without incurring the cost or clutter of an MFi controller will create demand for controller support among users.

I suspect Apple Arcade will help too. Arcade, which will debut this fall, is Apple’s Netflix-like game subscription service that will offer a curated collection of over 100 games at launch. Those games will work across iOS, iPadOS, tvOS, and the Mac. Based on what we know today, it doesn’t appear that controller support will be required for Arcade games. Nonetheless, having a unified framework for controller support and a collection of games that work on all of Apple’s devices should make supporting controllers more attractive to developers of the sort of new, premium games included in Arcade. If a critical mass of high-quality games works with controllers, that could also pressure developers of other games to follow suit.

What Will Become of MFi Controllers?

Existing MFi controllers will continue to work on Apple’s platforms, although the outlook for the already-beleaguered category isn’t bright. With its announcement, Apple eliminated the need for millions of gamers to ever consider an MFi controller. After all, most MFi controllers don’t offer anything more than the cheaper, better alternatives from Sony and Microsoft.

The Gamevice iPad mini controller.

The one exception may be controllers like the Gamevice line that integrates with iPhone and iPad hardware. Gamevice controllers split the physical controls in half, a little like a Nintendo Switch. The Gamevice wraps around an iPhone or iPad and uses its Lightning connector for power and passing controller input to games. I’ve used the iPad mini version of the Gamevice, and it’s an excellent way to play landscape-oriented games, though I don’t think I’d want to use it with any device bigger than the mini.

I’ve also tried a controller from Rotor Riot because until iOS 13 is released, it’s the only controller that has clickable thumbstick buttons. Rotor Riot is a drone accessory maker, and its controller can be used to navigate drones as well as play games. The controller comes with a bracket for your iPhone that sits above the controller as you play. The quality of the controller is similar to the Nimbus, but I don’t like playing games with my iPhone perched above the controller because it makes the whole setup top-heavy and tiring to use for very long. Like the Gamevice, the Rotor Riot controller operates over a Lightning cable that’s attached to the device, making it an alternative that from a practical standpoint only works with an iPhone.

An Unofficial Alternative

The 8BitDo SN30 Pro+ works with an iPad when connected with a USB-C cable.

Interestingly, there’s a wired alternative for the iPad Pro that Apple hasn’t mentioned anywhere. 8BitDo recently released a controller called the SN30 Pro+ that charges via USB-C. I’ve tried a few of 8BitDo’s controllers in the past for things like my SNES Classic Edition, but the SN30 Pro+ controller is on a different level altogether. The controller’s thumbsticks, triggers, and vibration can all be adjusted with a Windows app, and buttons can be remapped and programmed to play macros. On top of that, the SN30 Pro+ is incredibly well-built, balanced, and works with a ton of different systems.

8BitDo advertises the SN30 Pro+ as compatible with the Switch, Android, macOS, Steam, and Raspberry Pi. The SN30 Pro+ can’t connect over Bluetooth to iOS devices or the Apple TV, but it turns out that it does work as a fully-functional wired controller for the iPad Pro when connected with a USB-C cable. According to support emails posted by Reddit users, 8BitDo says it is working on wireless compatibility too.

If you’re looking for a general-purpose controller that works with a lot of different systems including the Nintendo Switch, the SN30 Pro+ is a fantastic option. The build-quality surpasses that of the MFi controllers I’ve tried, and it’s customizable for people interested in that. Although I’d prefer to play wirelessly with my SN30 Pro+ on both iOS and the Apple TV, playing wired on an iPad Pro propped up in a Smart Keyboard Folio case is a better experience than I expected.

With the assistance of a USB-C hub, you can even charge your iPad Pro while playing with a wired SN30 Pro+ controller. I connected my HyperDrive Slim 8-in-1 hub to my iPad Pro. Then, I used a USB-A to USB-C cable to connect my SN30 Pro+ to the hub and a USB-C to USB-C cable to connect an external battery to the hub. It’s a neat little hack that is handy for battery-hungry games. Unfortunately, the same setup does not work with iPads that have Lightning connectors.


Controllers are just one part of the new direction Apple is taking gaming on its platforms. The first step was to separate games from other apps when the App Store was redesigned with iOS 11. Last year iOS 12.1 added L3 and R3 thumbstick button support. Now, Arcade is splitting off a curated subset of games that can be played in the living room, on the desktop, and on the go. It’s a unified, broad-based approach to games unlike any of Apple’s previous efforts.

Toward the end of researching this story, I started playing Dead Cells on iOS, which I reviewed last week. It’s a fantastic game that’s been out for a while on the desktop and consoles. It’s also exactly the type of game that would fit well on all of Apple’s devices, not just the iPhone and iPad. For now, however, it doesn’t support the Apple TV, and although you can play it on a Mac, it’s not available on the Mac App Store. I hope Arcade will help bring games like Dead Cells to all of Apple’s platforms. The prospect of having games I love with me whether I’m at home on the couch, at my desk, or on the go is exciting.

Arcade feels like a reboot of the strategy that seemed to be coming with iOS 7 back in 2013 but never materialized. Apple still isn’t selling a controller with the Apple TV, but this time it’s not trying to convince us that the Siri Remote is a controller. Instead, it’s got real game controllers and a strategy to integrate high-quality games not just with the Apple TV experience, but across all of its devices. Part of the glue that can hold the experience together across every device is the controller you use, and with DualShock 4 and Xbox One S controllers, Apple finally has first-rate controllers to fill that role.

I hope this time will be different. Perhaps it’s not too late for Apple to take on the console makers as Gabe Newell suggested it would in 2013. Ironically, this time around, if Apple is able to pull it off, it will be using its competitors’ own hardware against them.

My skepticism about Apple and gaming remains strong, but for the first time in a long while, there is cause for cautious optimism. Apple no longer seems satisfied with the status quo of earning 30% from In-App Purchases in casual, touch-based free-to-play games. With Arcade and third-party controller support, the pieces are in place to make Apple’s devices a worthy competitor to desktop PCs and consoles as a place to play the best creative work from today’s top game developers. I just hope the prospect of a new service revenue stream from Arcade is enough to encourage Apple to follow through this time.


05 Sep 14:29

USB 4 Arrives with Faster Transfer Speed, Thunderbolt 3 and 100W Charging Support

by Mahit Huilgol
USB 4 has been released and the latest tech ushers in some great improvements over its predecessor. USB 4 was first announced earlier this year and is now set to replace USB 3.2. Highlights of USB 4 include better data transfer speeds, backward compatibility and much more.
05 Sep 14:29

The Best USB-C Laptop and Tablet Chargers

by Nick Guy

While it used to be difficult and expensive to buy a new laptop charger, computers with USB-C charging have made replacements easier to get and more affordable than ever. The best choice for almost any modern tablet or laptop is ZMI’s zPower Turbo 65W USB-C PD Wall Charger. This adapter is just as powerful and reliable as a replacement from your laptop’s manufacturer, and it’s smaller than almost any other we’ve seen. It even comes with its own USB-C charging cable, making it a particularly great value.

05 Sep 14:29

Chair Advice

by peter@rukavina.net (Peter Rukavina)

I have been typing professionally for a living for almost 40 years, and along the way I’ve learned a thing or two about ergonomics (enough to realize I have a lot more to learn).

Today I got a message from an ailing friend looking for advice for purchasing a better working chair, and this is how I replied:

  • It’s as much about the software as the hardware. In my experience, $500 on a chair + $500 on an ergonomics expert to advise (and show you how to use the chair) is a better long-term investment than $1000 on a chair.
  • There is no better antidote than not working so much. No chair has been invented that will make working a 12-hour day with no breaks possible.
  • It might not be the chair: the keyboard, the monitor, your mousing device, and their relative positions, can all contribute to unexpected pain in unexpected places.
  • You need to spend the $500 on the ergonomics expert every year because you will forget everything they tell you.

I’ve been using my current desk chair since 2010; it comes from Chairs Limited in Dartmouth, a firm that has the advantage of being able to customize extensively.

My Desk Chair

05 Sep 14:28

Naming iPhones

by Neil Cybart

Over the years, iPhone naming has had its ups and downs. There were the awkward names like iPhone 3GS and iPhone XS Max, and then there were strong industry-defining names like iPhone X. Based on the latest rumors, Apple appears to be in the early stages of moving away from an annual iPhone naming cadence altogether.

History

Most iPhones have an interesting story when it comes to nomenclature.

iPhone (2007). By going with “iPhone,” Apple relied heavily on consumers making the mental connection between a “breakthrough internet communications device” and a traditional cell phone. By relying on existing connotations, the iPhone sales pitch was made that much easier. Apple went on to use a similar naming strategy with Apple Watch.

iPhone 3G (2008). In retrospect, this may have been the most surprising iPhone name to date. Apple called the first update of its breakthrough mobile product after an industry term: 3G. The decision also spoke volumes about what drove iPhone adoption out of the gate. An iPhone with faster cellular connectivity was positioned as a key factor in customers' purchase decisions.

iPhone 3GS (2009). This is when things got weird with iPhone nomenclature. As explained by Apple’s Phil Schiller at the time, the “S” in iPhone 3GS stood for speed. The naming decision demonstrated how something that is now viewed as trivial - processor speed - was positioned as a key selling point for an early iPhone.

iPhone 4 (2010). Apple entered an iPhone naming scheme that would go on to last for years: a whole number characterized by a cosmetic redesign was followed by an “S” version the following year with more in the way of internal upgrades.

iPhone 4s (2011). An iPhone 4 with internal improvements. The “S” cycle provided Apple a few benefits. By sticking with the same overall iPhone design for more than a year, Apple was able to ramp up iPhone production quickly, and at a lower price, for “S” launches. The “S” cycle also reflected how consumers bought iPhones at the time. In the U.S., mobile carriers subsidized iPhones for $199 with the purchase of two-year contracts. The remaining price of the iPhone was recouped through higher monthly charges for data, texts, and service. This led to a two-year iPhone upgrade cycle and people choosing to either be on the whole number or “S” upgrade path.

iPhone 5 (2012). Arguably, this was the least noteworthy naming decision as Apple simply followed the existing pattern of using a whole number following the “S” model to denote a major cosmetic change. The iPhone 5 was the first iPhone to have a 4-inch screen.

iPhone 5c / 5s (2013). Apple faced its first major naming dilemma. In an effort to boost iPhone 5s sales and to maintain iPhone margins, Apple chose to replace the iPhone 5’s outer shell with lower cost colorful polycarbonate shells. Apple called this new model the iPhone 5c, and “c” presumably stood for color. Pundits ended up looking at the “c” as standing for cheap given the iPhone 5c’s lower price relative to iPhone 5s.

iPhone 6 / 6 Plus (2014). For the first time, Apple introduced two new flagship iPhones simultaneously - one with a 4.7-inch screen and the other with a 5.5-inch screen. Apple went with “Plus” for the model with the larger screen. The name worked as there was no other major difference between the two iPhone models aside from screen size (and battery life).

iPhone 6s / 6s Plus / SE (2015). This is where the iPhone “S” cycle began to lose much of its meaning from a feature and product development perspective. While the market would continue to look at “S” years as refinement years, Apple began to shift iPhone development so that every iPhone update contained a handful of useful new technologies and features. As for iPhone SE, Apple introduced a new model containing components from a few prior flagship iPhones in March 2016 with “SE” meaning special edition.

iPhone 7 / 7 Plus (2016). As was the year of iPhone 5, this was another uneventful year for iPhone naming. Apple followed the logical next step in an iPhone naming scheme that had been used for the previous six years.

iPhone X / 8 / 8 Plus (2017). iPhone naming seemed to cross the point of no return. Apple decided to call the first iPhone lacking a front-facing home button, something that had been rumored for years, iPhone X. For iPhone 8 and 8 Plus, instead of sticking with the “S” cycle, Apple skipped ahead to the next whole number. The decision was meant to have the new models be perceived as more advanced than what may have been implied with an “S” nomenclature.

iPhone XS / XS Max / XR (2018). Last year’s flagships were the most confusing from an iPhone naming perspective. Apple followed three general guidelines:

  1. Apple reverted to the “S” playbook to denote the model after a major redesign (iPhone X). Instead of this decision implying the return of the “S” cycle, in my view, Apple simply wanted to stick with the X branding for one more year.

  2. “Max” was used to distinguish the larger iPhone model from its smaller XS sibling.

  3. “R” was used for the lower cost alternative to the two iPhone XS flagships. According to Schiller, the “R” didn’t stand for anything, although some thought it stood for the model having a Retina display while “S” stood for Super Retina. Schiller mentioned that the letters R and S are used in the auto space to highlight special models.

The 2019 Lineup

Given Apple’s decision to go with the iPhone X naming scheme in 2017 and 2018, there aren’t too many logical paths that iPhone naming could follow unless Apple wants to try something completely new. There are two obvious choices:

  • iPhone XI (continue using roman numerals). Apple would say it’s pronounced “iPhone eleven” but everyone would call it “iPhone ex I.” While roman numerals could work if Apple were selling one flagship model, the fact that Apple has three flagships in the line would produce some confusion. Names like XI Max, XI Pro, or XIR aren’t as strong as the powerful iPhone X.

  • iPhone 11. This is a much simpler naming track. It makes more sense for Apple to use 11 than 12 as the implication is that the new iPhones are follow-ups to the iPhone X series. However, there is precedent for Apple to skip whole numbers as seen with the lack of iPhone 2 and iPhone 9.

The latest rumors point to Apple unveiling three new iPhones next week:

  • iPhone 11 Pro Max (successor to iPhone XS Max)

  • iPhone 11 Pro (successor to iPhone XS)

  • iPhone 11 (successor to iPhone XR)

The “Pro” would signal Apple wanting to draw attention to the differentiation between the two high-end flagship models and the lowest cost flagship. Apple has typically liked to use “Pro” to reflect models with greater specifications and capability as seen with iPad Pro, MacBook Pro, iMac Pro, and Mac Pro. In order to distinguish between the two “Pro” models, Apple would continue to use “Max” to denote the model with the largest screen.

Observations

Naming iPhones is more art than science. For example, the “R” in iPhone XR likely was chosen simply because it looked and sounded better than other letters. Meanwhile, Apple’s decision to go with “X” was likely heavily based on marketing, both in terms of marking the iPhone’s tenth anniversary and the fact that it looks cool and powerful from a branding perspective.

However, there is no question that iPhone nomenclature has been losing some of its usefulness and utility. Consumers are now routinely mispronouncing or misidentifying iPhone names and it’s easy to see why: iPhone XS Max doesn't exactly roll off one’s tongue. A similar issue will be seen if Apple ends up going with “iPhone 11 Pro Max.” People are increasingly saying “the new iPhone” or “the biggest one” when referring to the latest flagships.

It’s difficult to say if these changes have had a negative impact on iPhone sales. The iPhone business is being impacted by a number of developments including a longer upgrade cycle as people become content with what they currently have. It’s doubtful that a particular letter or number in the name would entice users to upgrade en masse to a certain iPhone model.

Future Considerations

Based on my calculations regarding the iPhone upgrade cycle, the next marginal iPhone buyer will likely hold on to his or her iPhone for a little over four years before upgrading. As the iPhone upgrade cycle continues to extend, the case for coming up with new iPhone naming every year declines.

There are subtle clues that suggest we may be approaching the point when Apple will do away with the annual iPhone naming cadence altogether. This wouldn’t mean that Apple would just go with “iPhone.” Instead, Apple would still need a way to distinguish iPhone models with different sizes and capabilities. In such a case, one likely option would be:

  • iPhone Pro (iPhones containing the most capability)

  • iPhone (the middle of the road option for the masses)

  • iPhone mini (the iPhone containing the smallest screen and fewest features)

One possibility is that Apple will expand the iPhone line to include more than three flagship models. For example, an updated iPhone SE would be a prime candidate for “iPhone mini.” Meanwhile, a larger model in the $699 to $799 range would make sense for “iPhone” while Apple would have more than one “Pro” model at the high end of the line. This naming strategy would be similar to how Apple currently names iPads: two “Pro” models, iPad mini, and iPad. (The iPad Air is positioned as a lower-cost iPad Pro.)

As for timing, 2021 may be a good bet for the earliest point when Apple would use this iPhone nomenclature. Consider the following:

  1. By 2021, “Pro” will have likely been used in iPhone nomenclature for two years.

  2. The latest rumors have Apple unveiling both a flagship iPhone with a smaller screen and an updated iPhone SE in 2020. These models would make it easier for Apple to use a simpler iPhone naming scheme. As it stands now, it would be difficult for Apple to position the middle-priced $999 model as just “iPhone” when the lower-priced iPhone XR has been the best-selling model.

  3. In the event that Apple plans on sticking with whole numbers for iPhone naming, the 2021 iPhones would potentially be called iPhone 13 - a number steeped in superstition. Apple could use that occasion as a way of moving past numbers and letters altogether.


05 Sep 14:27

Notion.io is my new favorite - nathanb



Chris Thompson wrote:

> I thought that part of the review was fair actually. The lack of
> simple/basic tables in Notion is one of the points that gets brought up
> again and again on social media. Multi-column layouts with a small
> number of columns are easy (at least on the desktop Notion; they're not
> shown as columns on mobile), and embedding a tabular database into a
> Notion page is easy. But if you just want a simple word processor-style
> table rather than an embedded tabular database, there's no support for
> that. It's powerful but overkill.


You are right, and this is one of my personal gripes with Notion: I wish it could do simple grid tables like OneNote's, as a separate feature from its databases. Though that's probably more about my habits, as Notion's ability to create column and row content areas on a page is functionally more similar to simple tables than its databases are. It's absolutely a valid distinction to make, and people can argue whether a database or simple-table approach fits better for them. But they first need to realize that tables and databases are fundamentally different.

She might as well have compared Solidworks to Paint on the basis of cropping a photo.
05 Sep 14:26

On the importance of taking notes (for students) - jaslar

I don't see many undergraduates on this list, but hey, you never know. If so, here's an article about what does and doesn't work about taking notes for classes.

https://qz.com/1701631/how-to-take-better-notes/

Of interest, maybe, because we're all students.
05 Sep 14:26

At least 130 losing jobs as Interfor announces closure of century-old Maple Ridge mill

mkalus shared this story.

Interfor has announced plans to permanently close its Hammond Cedar Sawmill in Maple Ridge, B.C., by the end of the year, the latest in a growing list of mill closures to rattle the province amid an industry slump.

The company said in a statement the mill has been working at half capacity for "some years" as the province's forestry industry grapples with "significant log supply challenges."

Duncan Davies, Interfor's president and CEO, also said "cedar producers have also been disproportionately impacted" by duties on softwood shipments into the United States.

Davies said the company, which has 18 mills across North America, will seek jobs for the affected workers at its other operations or at outside mills.

The United Steelworkers Union said 130 of its members are losing their jobs with the site's closure, along with dozens of contractors who depend on the mill for their business.

"It's devastating on our members ... it's probably closer to 200 people that will be affected," said Al Bieksa, union president.

Bieksa said the union had been in bargaining with Interfor since early June, but alarm bells went up when the company stalled. In August, the union's members at Hammond and at Interfor's Acorn mill in Delta voted 97 per cent in favour of strike action.

"Their resistance to bargaining led us to believe something was coming down the pike ... We were taken off guard that it's going to happen so quickly."

The current mill in Maple Ridge was built in 1963, but the first mill on the site dates back to 1908. Interfor said the closure is expected to happen before the end of the year, once the mill's existing inventory is processed and shipped out.

The statement said the company plans to "reorganize" its operations in order to spend elsewhere, including at its Acorn sawmill in Delta, B.C.

Various companies have announced nearly two dozen mill closures and short-term shutdowns at mills across B.C. this year, bringing hundreds of layoffs.

Workers thrown out of jobs in the struggling sector, along with contractors and those who govern municipalities dependent on forestry, are grappling with the fallout.

"We're just talking about what we're going to do, how we're going to get through life and prepare for the future," said Randy O'Leary, who's been at the mill for 36 years. 

David Richardson, who's worked there for 47 years, said he wants the government to take action. 

"Mills are going down," Richardson said. 

"Unless the government steps up to the plate and says stop sending raw log exports … it's killing us." 

B.C. Forests Minister Doug Donaldson said Thursday he understands the stress and turmoil that layoffs bring to workers, but said the upheaval isn't limited to the province.

"Overall, there's challenges globally to our B.C. forest industry as well as within B.C. with our log supply," the minister said. "We've seen the writing on the wall for quite a while and we're taking action legislatively."

Donaldson said the province is co-ordinating with Interfor to provide assistance to those who have been laid off, some of it coming from the company and the rest from provincial and federal programs.

The minister says those laid off in Maple Ridge may have an "abundance" of options for new employment in the populated Lower Mainland, whereas options are "reduced" in the Interior and remote areas.

The reasons for the downturn are varied. A lack of supply and volatile markets were blamed in other shutdowns.

Poor market conditions and log shortages due to outside forces such as the mountain pine beetle and wildfires are causing mills to shutter, leaving mill towns scrambling to adapt and search for new ways to diversify their economies.

"The whole industry is shrinking," David Elstone, executive director of The Truck Loggers Association, told CBC News in July. 

"That means that some contractors will likely not be working here once we get through the storm."

05 Sep 14:25

Finally available :: Eve Extend Bluetooth Range Extender

by Volker Weber


A Bluetooth range extender for Apple HomeKit-enabled Eve devices; it increases their range. For Eve devices only!


05 Sep 14:24

Amazon launches Fire TV Cube in Canada, unveils new Fire TV soundbar

by Jonathan Lamont

Amazon is finally bringing its Fire TV Cube to Canada.

The American e-commerce giant announced the Canadian launch of the Fire TV Cube alongside a new Fire TV Edition soundbar.

The Fire TV Cube is a hands-free Alexa-enabled Fire TV experience. Amazon boasts that it’s the fastest and most-powerful Fire TV ever.

The Cube supports Dolby Vision and 4K Ultra HD (UHD) content up to 60fps. It also supports HDR and HDR 10+. Further, its hexa-core processor powers apps like Netflix, YouTube, Prime Video and Crave, along with websites like Facebook and Reddit (accessible through built-in Firefox and Amazon’s Silk browser).

On top of all this, the Fire TV Cube features far-field voice control that lets you control your TV through Alexa. Users can navigate the Cube interface using voice commands, or simply ask Alexa to play a show and it’ll pick up where you left off.

Far-field voice recognition relies on eight microphones with advanced beamforming technology, which combines signals from each microphone to suppress noise, reverberation, currently-playing content and other things that may compete with your voice.
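
The simplest version of that idea is delay-and-sum beamforming: shift each microphone's signal so a wavefront from the target direction lines up across all the mics, then average, so the voice reinforces while off-axis sound partially cancels. Here is a toy NumPy sketch of the principle, assuming a linear array; it is not Amazon's actual pipeline:

    import numpy as np

    def delay_and_sum(signals, mic_positions_m, angle_rad, fs=16000, c=343.0):
        # signals: (n_mics, n_samples); mic_positions_m: mic offsets along
        # the array axis in metres; angle_rad: target direction vs. the axis.
        out = np.zeros(signals.shape[1])
        for sig, pos in zip(signals, mic_positions_m):
            # A plane wave from angle_rad reaches this mic earlier or
            # later by pos * cos(angle) / c seconds...
            delay_s = pos * np.cos(angle_rad) / c
            shift = int(round(delay_s * fs))
            # ...so undo that delay to line every mic up on the target.
            out += np.roll(sig, -shift)
        return out / len(signals)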

Of course, the Fire TV Cube also supports popular Alexa features like Multi-Room Music, Alexa Communication, Follow-up Mode and more.

Nebula Soundbar — Fire TV Edition

Along with the Fire TV Cube, Amazon also unveiled the first Fire TV Edition soundbar. Similar to Fire TV Edition smart TVs, the soundbars include Fire TV and provide a smart TV experience, even on not-so-smart TVs. Anker partnered with Amazon to launch the first Fire TV Edition soundbar: the Nebula Soundbar — Fire TV Edition.

The soundbar will turn any TV into a smart TV with Alexa voice control and the Fire TV interface. Nebula also supports 4K UHD and Dolby Vision.

The Fire TV Cube is now available for pre-order in Canada for $149.99. The Cube will ship beginning October 10th and comes with an IR extender cable and an Ethernet adapter. Fire TV Cube is also available in the U.S., U.K., Germany and Japan. You can learn more about the Cube here.

Amazon also offers a bundle with the Fire TV Cube and a Ring Video Doorbell 2 that costs $358.99 (about $40 off).

The Nebula Soundbar — Fire TV Edition is also available for pre-order today at $269.99. It will begin shipping on November 21st. The soundbar is also available in the U.S., U.K. and Germany.

Source: Amazon Canada


05 Sep 14:24

Apple might launch new iPhone SE in spring 2020

by Shruti Shekar

Apple might be planning to launch a new mid-range successor to 2016’s iPhone SE.

The tech giant could release the new phone next spring, a Nikkei report indicated, adding that Apple wants to improve sales in markets like China and India, where Huawei, Samsung and Xiaomi have a more significant presence.

The new model will reportedly be very similar to the iPhone 8 released back in 2017. The phone will most likely feature an LCD display and “most of the same components” as this year’s flagship iPhones.

The phone will also feature a single-lens rear camera and 128GB of storage, according to Nikkei’s report.

Rumours surrounding Apple working on an iPhone SE 2 have been circulating for years now.

Source: Nikkei Via: The Verge


05 Sep 14:24

FCC filings suggest new Surface Mouse, Keyboard could be on the way

by Jonathan Lamont

Microsoft could have new Surface accessories in the works, according to recently uncovered FCC filings.

Spotted by WindowsBlogItalia, the filings don’t include many details about the new accessories beyond that they are a wireless mouse and keyboard. However, the filings do include testing dates in July and August for the mouse and keyboard respectively.

Both the keyboard and mouse filings include the date the applications were received and confidentiality statements. These statements request certain details not be revealed, with one specifically requesting details remain secret until February 10th, 2020.

It’s an interesting date, as that could mean the new Surface Keyboard and Mouse may come out long after the upcoming Surface Event in October. Alternatively, it could merely be Microsoft acting cautiously and securing a window to announce and launch the devices.

Further, Windows Central points out that Microsoft hasn’t updated its Surface Ergonomic Keyboard or its Surface Mouse in a long time, so these could be the new accessories.

Whatever the situation, we’ll likely learn more in October.

Source: FCC, (2) Via: Windows Central, WindowsBlogItalia


05 Sep 14:22

Apple’s ‘one more thing’ moment could be a new A12 chip-powered Apple TV

by Patrick O'Rourke

With only a few days to go until Apple’s fall hardware keynote, leaks surrounding the event continue to appear at a rapid pace, with the latest rumour surprisingly not being reported until now.

As first uncovered by MacRumors, @never_released, a frequently reliable Twitter account that has a history of leaking accurate codenames of unreleased hardware, claims that an Apple TV refresh could be the tech giant’s big ‘one more thing’ moment at its fall keynote.

The new Apple TV, identified as ‘Apple TV11.1,’ also has the codename ‘J305’ attached to it. As you may have already guessed, this new version of the Apple TV brings the A12 processor to Apple’s base level set-top box. MacRumors says that it uncovered both the identifier and model number in an internal iOS 13 build the publication has been parsing through for the last few weeks.

Apple’s 4K Apple TV features the same A10X processor included in the iPad Pro (2017). @never_released also confirmed that the new Apple TV will feature Apple’s A12 processor and not the A12X. It’s unclear if this spec update is headed to the standard Apple TV or the Apple TV 4K, but it’s likely Apple has plans to discontinue the base Apple TV model in favour of one UHD-capable set-top box with an improved processor.

The 4th-generation Apple TV was released back in 2015, followed by the Apple TV 4K in 2017.

Given Apple is expected to reveal release dates for its Apple Arcade game subscription service and Apple TV+, the company’s long-awaited TV streaming platform, next week, it makes sense that a new, more powerful Apple TV could be on the way as well.

Source: @never_released Via: MacRumors


05 Sep 14:19

This father-daughter duo travelled from Illinois for first day at U of T — on a tandem bike

mkalus shared this story.

Carlin Henikoff has a lot to get used to this week — a new country, new city, a brand new school, and the soreness that comes with cycling almost 900 kilometres in just under a week.

The new University of Toronto student took an unusual route to campus. While some students might drive or fly, the 18-year-old made the trip with her dad, Troy, on a tandem bike.

The pair left the city of Evanston, Illinois, which is just north of Chicago, on Aug. 25, and got to Toronto last Saturday.

"It was a challenge, for sure," Henikoff told CBC News.

"There were some bumps in the road, but they were relatively minor."

Henikoff says she grew up in a very "bike-centric" family, where there were twice as many miles on their bikes as on the family car.

It was a family friend who suggested making the trip, and after mulling it over and doing some research, the pair decided to go for it. Their path took them through Milwaukee, where they took a ferry to Michigan before hitting Ontario.  

Riding a tandem bike is an extra challenge, Henikoff said. There are two sets of pedals linked by a chain, and the person at the front of the bike steers. Both riders have to work together to balance the bike and set its speed.

This kind of riding tests communication, teamwork and compromise, she said.

"My dad says getting on a tandem bike with someone will accelerate their relationship — however it might be going," Henikoff said with a smile.

Fortunately, there were no major issues for the two.

"We had a goal, we set the goal, and we worked together to achieve it," she said.

Henikoff says she was drawn to U of T for the ability to study both arts and sciences. She's planning to study cognitive science, as well as the history and philosophy of science and technology.

She also just joined the university's mountain bike team.

"Toronto is just such a cool place culturally," she said.


Here and Now Toronto

A first year U of T student rode to school from Chicago to Toronto for move-in day

Starting your first year of university is challenging enough, but one new U of T student decided to take a particularly intense road to campus. She cycled to university all the way from a town near Chicago. And get this: she rode on a tandem bike with her dad. She joined us in studio to talk about the week-long journey. 6:49

adam.carter@cbc.ca

05 Sep 14:19

Twitter Favorites: [anildash] I should get a special internet award for being the only person who’s been on social media for 20 years without eve… https://t.co/ApLmPd7DH5

Anil Dash 🥭 @anildash
I should get a special internet award for being the only person who’s been on social media for 20 years without eve… twitter.com/i/web/status/1…
05 Sep 14:19

Review: Porsche’s all-electric Taycan is poised to give Tesla a jolt - The Globe and Mail

mkalus shared this story.

Let’s get the hyperbole out of the way right from the get-go: The 2020 Porsche Taycan is one of the most important production cars of the 21st century. The reason? This forthcoming all-electric sedan could deliver equal or better performance than the Tesla Model S – and it’s a Porsche.

That’s an important distinction.

Since the introduction of the Model S in 2012, “traditional” automakers have struggled to understand the feverish, unwavering and, at times, seemingly irrational support for Tesla from early adopters, investors and technophiles alike. Without question, Tesla has been a disruptive force in the industry, jump-starting the modern-day battery electric vehicle (BEV) movement. To this point, though, the company’s ability to disrupt has hinged on a few critical factors.

Among the most significant: No BEVs compete directly with the three Tesla models – at least, not in a head-to-head, dollar-for-dollar, kilowatt-hour-to-kilowatt-hour type of way. But with the forthcoming Porsche Taycan, billed internally as “the world’s first fully electric sports car,” the Tesla Model S may have a direct competitor – and it may have met its match.

When the first BEV from Porsche hits the market, there will be two models to choose from: the Taycan Turbo and the Taycan Turbo S. (For clarification, neither model features a turbocharged engine. The nomenclature is intended to mirror that used for the Porsche 911 model line.) The difference between the two comes down to the output of the electric motor mounted on the front axle.

Both the Turbo and Turbo S feature the same 93.4 kWh battery pack, a network of lithium-ion cells mounted in the floor to ensure a low centre of gravity. Both also have electric motors at the front and rear axles, giving the Porsche all-electric, all-wheel drive. Nothing revolutionary so far, but let’s dig deeper into the technical details.

The Taycan has been engineered for “repeatable performance” – compared with the Tesla Model S P100D, it’s slightly slower from a dead start and has less overall range. However, it should have far more consistent performance.

In its famed “ludicrous mode,” the Tesla can hit 100 kilometres an hour in a revelatory 2.5 seconds, but it would be unlikely to match this time immediately thereafter owing to battery-cooling challenges. The engineers at Porsche have spent the better part of four years tackling these precise challenges.

First, they opted for a permanent magnet synchronous motor (PMSM) for both axles. This design is more expensive than other types of motors, but it’s more compact, more efficient and better able to manage heat. Next, the engineers linked a two-speed transmission to the electric motor at the back, which allows for maximum acceleration in first gear, and a higher top speed and longer range in second gear. (The front motor uses a single-speed transmission.)

The Taycan also features 800-volt technology and pulse-controlled inverters for both motors to manage the juice from the battery pack. (The Turbo S employs a larger inverter for the front axle motor, which allows for the increase in power and performance.) Lastly, there’s an intricate thermal-management system that’s designed to keep the battery within the optimal temperature range when driving as well as when the Taycan is plugged in and recharging.

But enough with the words – let’s get to the numbers.

The Taycan Turbo produces 616 horsepower and 627 lb.-ft of torque, all of that available from a standstill. With the overboost function of the electrified powertrain, the horsepower jumps to 670 for brief spells. The claimed zero-to-100-km/h time is 3.2 seconds and top speed rings in at 260 km/h. For the Turbo S, the numbers are more eye-popping: 750 horsepower when in overboost, 774 lb.-ft of torque, the same top speed and the zero-to-100 km/h sprint in 2.8 seconds.

At the end of the technical presentation, attendees had the chance to ride shotgun in the Taycan Turbo S around the Porsche Experience Center, a test track facility in Atlanta. Based on this run, my belief is that Porsche is being conservative with the acceleration figures – my gut, which is still in the back seat of the car somewhere, says a time of 2.5 seconds is closer to the truth.

As evidence that the objective of repeatable performance has been achieved, the Porsche team put up some even more impressive numbers during development. There’s the new lap record for BEVs around the Nordschleife race course in Germany, which now stands at 7 minutes 42 seconds – a fast time made more newsworthy when you consider there’s been no official attempt made with a Model S. The Taycan also completed 26 consecutive sprints from zero to 200 km/h on an airfield in Germany. The news here: The last run was only 0.8 seconds slower than the first.

Finally, there was an endurance test that took place at the Nardo Technical Center in Italy. Over the course of 24 hours, the Taycan completed 3,425 kilometres at an average speed of 143 km/h. Of note: This run included recharging sessions – and this is another area where Porsche looks to excel. When hooked up to a high-speed charger, the Taycan is expected to vault from a 5-per-cent charge to an 80-per-cent charge in just 22.5 minutes.
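
As a quick back-of-the-envelope check on that charging claim (assuming the full 93.4 kWh pack capacity quoted above is usable, which is optimistic), the implied average charging power works out to roughly 187 kW:

pack_kwh = 93.4                                # quoted battery capacity
energy_added_kwh = (0.80 - 0.05) * pack_kwh    # 5% -> 80% charge, ~70 kWh
time_hours = 22.5 / 60                         # 22.5 minutes

avg_power_kw = energy_added_kwh / time_hours
print(f"{avg_power_kw:.0f} kW average")        # ~187 kW

That figure is consistent with the high-speed charging pitch of the 800-volt architecture.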

In terms of range, the Taycan may not be able to match the Model S. The Turbo offers up to 450 km in range according to testing standards in the European Union or roughly 400 km in U.S. Environmental Protection Agency terms. But the goal for Porsche has not been to match Tesla, measure for measure. The goal has been to engineer a BEV that drives like a Porsche.

So there are only two big question marks left: how the 2020 Porsche Taycan drives, and whether prospective customers will pay $173,900 (for the Turbo) or $213,900 (for the Turbo S) for the privilege of owning one.

The writer was a guest of the automaker. Content was not subject to approval.

05 Sep 14:19

Given his track record with employers, constituents, wives, mistresses & party leaders, I honestly thought Johnson would be more adept at lying to the country.

by mrjamesob
mkalus shared this story from mrjamesob on Twitter.

Given his track record with employers, constituents, wives, mistresses & party leaders, I honestly thought Johnson would be more adept at lying to the country.

9853 likes, 2027 retweets
05 Sep 14:18

On the importance of taking notes (for students) - satis

Here's the URL without the tacked-on tracking info:

https://qz.com/1701631/how-to-take-better-notes/

Wish it had something to do with outliners though!
05 Sep 14:18

Firstnet – Some Technical Details

by Martin

When I was in the US recently, I noticed that AT&T now broadcasts two Mobile Country Code / Mobile Network Code combinations from their LTE base stations in the places I checked in Illinois and Ohio. One is their own, MCC/MNC 310/410, and the other is MCC/MNC 313/100. A quick search revealed that this is the code assigned to Firstnet, a network for emergency services and first responders such as police, fire departments, ambulances and other public functions that require high-priority access in congestion situations. Wikipedia has an article about Firstnet here. Up to now I always thought that the US wanted to establish a separate network, but it looks like I was either wrong or they have changed their mind over the years.

Another clue I should have noticed straight away is that SIB5 in the places I checked had band 14 in its list of bands. Band 14 is in the 700 MHz range and has been allocated the lowest possible reselection priority of 1. In other words, devices only camp on this band if there is nothing else around. That makes sense for a low band: you don’t want devices to camp there while channels on higher bands are available. However, band 14 is special, as its bandwidth of 10 MHz (or 20 MHz if you count the US way) has been set aside for a first responder network. By today’s standards, 10 MHz is not much anymore, but especially in the lower bands between 600-900 MHz, broader channels are usually not available.
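
As a toy illustration of that reselection-priority behaviour (only band 14’s priority of 1 comes from this post; the other bands and values below are made up for the example, and real 3GPP cell reselection is considerably more involved):

# Simplified idle-mode band selection: camp on the highest-priority
# band that is actually available. With priority 1, band 14 is only
# chosen when nothing else is around.
BAND_PRIORITY = {2: 5, 4: 6, 12: 3, 14: 1}  # band -> priority (illustrative values)

def pick_band(available_bands):
    candidates = [b for b in available_bands if b in BAND_PRIORITY]
    return max(candidates, key=lambda b: BAND_PRIORITY[b]) if candidates else None

print(pick_band([12, 14]))  # 12 -- the higher-priority band wins
print(pick_band([14]))      # 14 -- last resort only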

But as I said before, Firstnet is not a separate radio network. Instead, it piggy-backs on AT&T’s existing LTE radio network by using 3GPP’s Multi-Operator Core Network (MOCN) features to offer voice and data services to first responders. I will leave the financial and political issues aside in this article and only focus on the technical aspects. However, if you care to find out more about those, have a look here.

According to this technical description (see SD-1.2), Firstnet subscribers can use not only band 14 but all bands deployed at a cell site. AT&T subscribers in turn also have access to band 14 spectrum as part of Carrier Aggregation (CA), or as a single channel in case nothing else is found. Firstnet subscribers, however, have precedence over AT&T subscribers, and inside Firstnet, some subscribers can have a higher priority than others. One document linked to below describes which 3GPP features are used to ensure Firstnet users can preempt traffic in congested areas, so I won’t go into the details here.
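
As a minimal sketch of that precedence idea: in LTE, admission control with preemption is typically driven by ARP (Allocation and Retention Priority) values. The toy model below only mimics the behaviour; it is not the actual 3GPP procedure, and the ARP values and user names are invented for the example.

# Toy admission control: lower ARP value = higher priority.
from dataclasses import dataclass

@dataclass
class Bearer:
    user: str
    arp: int                 # 1 (highest) .. 15 (lowest)
    vulnerable: bool = True  # may be preempted

def admit(cell, new, capacity=2):
    if len(cell) < capacity:
        cell.append(new)
        return True
    # Preempt the lowest-priority vulnerable bearer, if one ranks below the new bearer.
    victim = max((b for b in cell if b.vulnerable and b.arp > new.arp),
                 key=lambda b: b.arp, default=None)
    if victim is None:
        return False         # congested, new bearer rejected
    cell.remove(victim)
    cell.append(new)
    return True

cell = [Bearer("att-sub-1", arp=9), Bearer("att-sub-2", arp=9)]
print(admit(cell, Bearer("firstnet-fire", arp=2)))  # True: an AT&T bearer is preempted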

As per the MOCN approach, FirstNet shares the RAN with AT&T subscribers but uses a separate LTE core network, which seems to be managed by AT&T as well. Part of this core network seems to be an IMS VoLTE network, including ePDGs for VoWifi access. One feature any network for first responders must have is the ability to make group calls, to replace walkie-talkies. At the moment, Firstnet states that they have deployed a proprietary Push To Talk solution they refer to as EPTT, but they are in the process of installing two independent, 3GPP-compliant, IMS-based Mission Critical Push to Talk (MCPTT) services. These are expected to be available towards the end of 2019. Tightly integrating them into devices is going to be a challenge, unless of course it is done directly by the device manufacturers. Time will tell. Also, specialized hardware such as external microphones and firefighter-grade speakers, so responders can leave the device on their belt while talking or waiting for a response, will be a make-or-break usability requirement.

Another thing a network stands or falls on is the devices that support it. As Firstnet subscribers can use all of AT&T’s bands, no special device is required in theory. In practice, a device has to support FirstNet’s VoLTE parameters, band 14 for rural coverage (which seems to be optional), and 3GPP’s High Priority Access (HPA) features for preemption in congestion situations. Fortunately, a lot of Samsung and LG smartphones already declare support, and recent iPhones do as well. Done deal, I would say. Here’s a link to a recent device list to give you an impression.

There are a number of features to ensure preemption and priority in congestion situations, and there is what is referred to as an ‘uplift’ mechanism (see SD-3.2.1.3) to increase the priority of a subscriber or a group of subscribers for a minimum of 1 hour and a maximum of 24 hours. Again, I haven’t had a closer look yet at which 3GPP features this mechanism interacts with, but supporting it seems like a good idea.

There we go: this is my first high-level collection of technical information on the subject, something nobody on the net seems to have put together so far. If you have further technical details that you think are useful to a more technically oriented readership, please consider leaving a comment below. And for some further information, here’s a link to an article on Fiercewireless about the status of the network in summer 2019.

05 Sep 14:17

On the Many NetNewsWire Feature Requests to Show Full Web Pages

A number of people have asked that NetNewsWire show the full web page — right there, in the app — after clicking a link.

The idea is pretty good! It solves two big problems:

  • You get full content, which is great when a feed contains only summaries or truncated articles
  • You don’t have to switch to another app: you can stay right where you are

You’d think it’s a no-brainer, and we should just go ahead. But there are other considerations.

One big one is that your ad blockers and privacy extensions won’t run. They work in Safari, but they do not extend to other apps that use WebKit. This means that viewing a web page in NetNewsWire would be less secure and more annoying than viewing the same page in Safari (or whatever your browser is).

This points to one of my design principles: the app should have boundaries. Some features belong in the app, and some features are best left to apps that do that feature way better than NetNewsWire could. One of those things is showing web pages — that’s really a web browser feature.

Having boundaries means we can concentrate on doing a great job at the things that do belong in the app.

(Before you mention SFSafariViewController, recall that it’s iOS-only.)

What about the glory days?

“But Brent! In NetNewsWire 2.0 you added a tabbed browser to NetNewsWire, and it was awesome and a hugely popular feature!”

It was! But times have changed. Many websites are hostile these days. In 2005, this feature was fine — but these days it’s totally not.

A winged messenger arrives with a solution

There is a solution to the problem of showing full content and not leaving the app, and it’s a feature that really does belong in an RSS reader: using content extraction to grab the article from the original page.

If you’ve ever used Safari’s Reader view, then you know what I’m talking about. The idea is that NetNewsWire would do something very much like the Reader view (but inline, in the article pane), that grabs the content and formats it nicely, without all the extra junk that is not the article you want to read.

There are a number of open source options for this. We’re looking at using Feedbin’s content extraction service (which wouldn’t require you to have a Feedbin account).

The generous folks at Feedbin are running a copy of the open-source Mercury Parser, and they’ve offered to open this service up to RSS readers like NetNewsWire. (Reeder uses it already, for instance.)
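
For a flavour of what Reader-style extraction does, here’s a minimal sketch using the open-source readability-lxml package. This is just one of the open-source options mentioned above, not the Mercury Parser service NetNewsWire is considering, and the URL is a placeholder:

# pip install readability-lxml requests
import requests
from readability import Document

html = requests.get("https://example.com/some-article").text
doc = Document(html)

print(doc.title())    # extracted article title
print(doc.summary())  # cleaned-up article HTML, with the extra junk stripped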

When?

Right now we’re working on NetNewsWire 5.0.1, which is (almost entirely) a bug-fix release. I don’t know what’s going to be in 5.1 yet — we’re still digesting all the feedback, looking at our original roadmap, and thinking about things.

We’re also working on NetNewsWire for iOS! We’re busy.

But this is definitely the kind of feature that should come sooner rather than later.

05 Sep 14:16

The Economist on free speech. Meanwhile, in mature ...

mkalus shared this story from Fefes Blog.

The Economist on free speech.
Meanwhile, in mature democracies, support for free speech is ebbing, especially among the young, and outright hostility to it is growing. Nowhere is this more striking than in universities in the United States. In a Gallup poll published last year, 61% of American students said that their campus climate prevented people from saying what they believe, up from 54% the previous year. Other data from the same poll may explain why. Fully 37% said it was “acceptable” to shout down speakers they disapproved of to prevent them from being heard, and an incredible 10% approved of using violence to silence them.

Many students justify this by arguing that some speakers are racist, homophobic or hostile to other disadvantaged groups. This is sometimes true. But the targets of campus outrage have often been reputable, serious thinkers. Heather Mac Donald, for example, who argues that “Black Lives Matter” protests prompted police to pull back from high-crime neighbourhoods, and that this allowed the murder rate to spike, had to be evacuated from Claremont McKenna College in California in a police car. Furious protesters argued that letting her speak was an act of “violence” that denied “the right of black people to exist”.

I am spontaneously reminded of several sci-fi dystopias.
05 Sep 14:16

Why Teens Are Creating Their Own News Outlets

Rainesford Stauffer, Teen Vogue, Sept 05, 2019

When I created my own newspaper in Grade 5, content was limited to what I read and saw around town, the technology of the day was a typewriter and a mimeograph machine, and my reach was limited to where I could carry copies. By contrast, theCramm gathers from news sources around the world, uses Instagram and instant messaging, and can reach readers anywhere there's an internet connection. I graduated into a world of zines and underground pubs, newspapers, websites and, eventually, this newsletter. I wonder what 15-year-old Olivia Seltzer will graduate into.

Web: [Direct Link] [This Post]
05 Sep 14:16

Our journey to type checking 4 million lines of Python

by Jukka Lehtosalo

Dropbox is a big user of Python. It’s our most widely used language both for backend services and the desktop client app (we are also heavy users of Go, TypeScript, and Rust). At our scale—millions of lines of Python—the dynamic typing in Python made code needlessly hard to understand and started to seriously impact productivity. To mitigate this, we have been gradually migrating our code to static type checking using mypy, likely the most popular standalone type checker for Python. (Mypy is an open source project, and the core team is employed by Dropbox.)

Dropbox has been one of the first companies to adopt Python static type checking at this scale. These days thousands of projects use mypy, and things are quite battle tested. It has been a long journey for us to get to this point, and there were a bunch of false starts and failed experiments along the way. This post tells the story of Python static checking at Dropbox, from the humble beginnings as part of my academic research project, to the present day, when type checking and type hinting is a normal thing for numerous developers across the Python community. It is supported by a wide variety of tools such as IDEs and code analyzers.

Why type checking?

If you have only ever used dynamically typed Python, you might wonder what all the fuss about static typing and mypy is. You may even enjoy Python because of its dynamic typing, and the whole thing may be a bit baffling. The key to static type checking is scale: the larger your project, the more you want (and eventually need) it.

Once your project is tens of thousands of lines of code, and several engineers work on it, our experience tells us that understanding code becomes the key to maintaining developer productivity. Without type annotations, basic reasoning such as figuring out the valid arguments to a function, or the possible return value types, becomes a hard problem. Here are typical questions that are often tricky to answer without type annotations:

  • Can this function return None?
  • What is this items argument supposed to be?
  • What is the type of the id attribute: is it int, str, or perhaps some custom type?
  • Does this argument need to be a list, or can I give a tuple or a set?

Looking at this fragment with type annotations, all of these questions are trivial to answer:

from typing import Dict, Sequence  # MetadataItem is defined elsewhere in the codebase

class Resource:
    id: bytes
    ...
    def read_metadata(self,
                      items: Sequence[str]) -> Dict[str, MetadataItem]:
        ...
  • read_metadata does not return None, since the return type is not Optional[…].
  • The items argument is a sequence of strings. It can’t be an arbitrary iterable.
  • The id attribute is a byte string.

In a perfect world, you could expect these things to be documented in a docstring, but experience overwhelmingly says that this is often not the case, and even when a docstring exists, it’s often ambiguous or imprecise, leaving a lot of room for misunderstandings. This problem can become critical for large teams or codebases:
Although Python is really good at early and middle stages of a project, at a certain point successful projects and companies that use Python may face a critical decision: “should we rewrite everything in a statically typed language?”

A type checker like mypy solves this problem by providing a formal language for describing types, and by validating that the provided types match the implementation (and optionally that they exist). In essence, it provides verified documentation.

There are other benefits as well, and these are not trivial either:

  • A type checker will find many subtle (and not so subtle) bugs. A typical example is forgetting to handle a None value or some other special condition.
  • Refactoring is much easier, as the type checker will often tell exactly what code needs to be changed. We don’t need to hope for 100% test coverage, which is usually impractical anyway. We don’t need to study deep stack traces to understand what went wrong.
  • Even in a large project, mypy can often perform a full type check in a fraction of a second. Running tests often takes tens of seconds, or minutes. Type checking provides quick feedback and allows us to iterate faster. We don’t need to write fragile, hard-to-maintain unit tests that mock and patch the world to get quick feedback.
  • IDEs and editors such as PyCharm and Visual Studio Code take advantage of type annotations to provide code completion, to highlight errors, and to support better go to definition functionality—and these are just some of the helpful features types enable. For some programmers, this is the biggest and quickest win. This use case doesn’t require a separate type checker tool such as mypy, though mypy helps keep the annotations in sync with the code.

Prehistory of mypy

The story of mypy begins in Cambridge, UK, several years before I joined Dropbox. I was looking at somehow unifying statically typed and dynamic languages as part of my PhD research. Inspired by work such as Siek and Taha’s gradual typing and Typed Racket, I was trying to find ways to make it possible to use the same programming language for projects ranging from tiny scripts to multi-million line sprawling codebases, without compromising too much at any point in the continuum. An important part of this was the idea of gradual growth from an untyped prototype to a battle-tested, statically typed product. To a large extent, these ideas are now taken for granted, but it was an active research problem back in 2010.

My initial work on type checking didn’t target Python. Instead I used a home-grown, small language called Alore. Here is an example to give you an idea of what it looked like (the type annotations are optional):

def Fib(n as Int) as Int
  if n <= 1
    return n
  else
    return Fib(n - 1) + Fib(n - 2)
  end
end

Using a simplified, custom language is a common research approach, not least since it makes it quick to perform experiments, and various concerns not essential for research can be conveniently ignored. Production-quality languages tend to be large and have complicated implementations, making experimentation slow. However, any results based on a non-mainstream language are a bit suspect, since practicality may have been sacrificed along the way.

My type checker for Alore looked pretty promising, but I wanted to validate it by running experiments with real-world code, which didn’t quite exist for Alore. Luckily, Alore was heavily inspired by Python. It was easy enough to modify the checker to target Python syntax and semantics, making it possible to try type checking open source Python code. I also wrote a source-to-source translator from Alore to Python, and used it to translate the type checker. Now I had a type checker, written in Python, that supported a Python subset! (Certain design decisions that made sense for Alore were a poor fit for Python, which is still visible in parts of the mypy codebase.)

Actually, the language wasn’t quite Python at that point: it was a Python variant, because of certain limitations of the Python 3 type annotation syntax. It looked like a mixture of Java and Python:

int fib(int n):
    if n <= 1:
        return n
    else:
        return fib(n - 1) + fib(n - 2)

One of my ideas at the time was to also use type annotations to improve performance, by compiling the Python variant to C, or perhaps JVM bytecode. I got as far as building a prototype compiler, but I gave up on that, since type checking seemed useful enough by itself.

I eventually presented my project at the PyCon 2013 conference in Santa Clara, and I chatted about it with Guido van Rossum, the BDFL of Python. He convinced me to drop the custom syntax and stick to straight Python 3 syntax. Python 3 supports function annotations, so the example could be written like this, as a valid Python program:

def fib(n: int) -> int:
    if n <= 1:
        return n
    else:
        return fib(n - 1) + fib(n - 2)

Some compromises were necessary (this is why I invented my own syntax in the first place). In particular, Python 3.3, the latest at that point, didn’t have variable annotations. I chatted about various syntax possibilities over email with Guido. We decided to use type comments for variables, which does the job, but is a bit clunky (Python 3.6 gave us a much nicer syntax):

products = []  # type: List[str]  # Eww

Type comments were also handy for Python 2 support, which has no built-in notion of type annotations:

def fib(n):
    # type: (int) -> int
    if n <= 1:
        return n
    else:
        return fib(n - 1) + fib(n - 2)

It turned out that these (and other) compromises didn’t really matter too much—the benefits of static typing made users quickly forget the not-quite-ideal syntax. Since type checked Python now had no special syntax, existing Python tools and workflows continued to work, which made adoption much easier.

Guido also convinced me to join Dropbox after finishing my PhD, and there begins the core of this story.

Making types official (PEP 484)

We did the first serious experiments with mypy at Dropbox during Hack Week 2014. Hack Week is a Dropbox institution—a week when you can work on anything you want! Some of the most famous Dropbox engineering projects can trace their history back to a Hack Week. Our take-away was that using mypy looked promising, though it wasn’t quite ready for wider adoption yet.

An idea was floated around that time to standardize the type hinting syntax in Python. As I mentioned above, starting from Python 3.0 it has been possible to write function type annotations in Python, but they were just arbitrary expressions, with no designated syntax or semantics, and they are mostly ignored at runtime. After Hack Week, we started work on standardizing the semantics, and it eventually resulted in PEP 484 (co-written by Guido, Łukasz Langa, and myself). The motivation was twofold. First, we hoped that the entire Python ecosystem would embrace a common approach to type hinting (the Python term for type annotations), instead of risking multiple, mutually incompatible approaches. Second, we wanted to openly discuss how to do type hinting with the wider Python community, in part to avoid being branded heretics. Since Python is a dynamic language famous for “duck typing”, there was certainly some initial suspicion about static typing in the community, but it eventually subsided when it became clear that it was going to stay optional (and after people understood that it’s actually useful).

The eventually accepted type hinting syntax was quite similar to what mypy supported at the time. PEP 484 shipped with Python 3.5 in 2015, and Python was no longer (just) a dynamic language. I like to think of this as a big milestone for Python.

The migration begins

We set up a 3-person team at Dropbox to work on mypy in late 2015, which included Guido, Greg Price, and David Fisher. From there on, things started moving pretty rapidly. An immediate obstacle to growing mypy use was performance. As I implied above, an early goal was to compile the mypy implementation to C, but this idea was scrapped (for now). We were stuck with running on the CPython interpreter, which is not very fast for tools like mypy. (PyPy, an alternative Python implementation with a JIT compiler, also didn’t help.)

Luckily, there were algorithmic improvements to be had. The first big major speedup we implemented was incremental checking. The idea is simple: if all dependencies of a module are unchanged from the previous mypy run, we can use data cached from the previous run for the dependencies, and we only need to type check modified files and their dependencies. Mypy goes a bit further than that: if the external interface of a module hasn’t changed, mypy knows that other modules that import the module don’t need to be re-checked.

Incremental checking really helps when annotating existing code in bulk, as this typically involves numerous iterative mypy runs, as types are gradually inserted and refined. The initial mypy run would still be pretty slow, since many dependencies would need to be processed. To help with that, we implemented remote caching. If mypy detects that your local cache is likely to be out of date, mypy downloads a recent cache snapshot for the whole codebase from a centralized repository. It then performs an incremental build on top of the downloaded cache. This gave another nice performance bump.
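
In sketch form, the module-level invalidation logic amounts to comparing fingerprints of a module’s source and of its dependencies’ interfaces against the cached run. This is greatly simplified (mypy’s real cache keys also cover options and more), and current_iface_hash is a hypothetical helper:

# Re-check a module only if its own source changed, or the externally
# visible interface of one of its dependencies changed.
import hashlib

def fingerprint(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def needs_recheck(module: str, source: str, deps: list, cache: dict) -> bool:
    entry = cache.get(module)
    if entry is None or entry["src"] != fingerprint(source):
        return True
    # current_iface_hash (hypothetical) returns a dependency's current interface hash.
    # Unchanged dependency interfaces mean importers can reuse cached results.
    return any(entry["iface"][d] != current_iface_hash(d) for d in deps)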

This was a period of quick organic adoption at Dropbox. By the end of 2016, we were at about 420,000 lines of type-annotated Python. Many users were enthusiastic about type checking. The use of mypy was spreading quickly across teams at Dropbox.

Things were looking good, but there was still a lot of work to be done. We started running periodic internal user surveys to find pain points and to figure out what work to prioritize (a habit that continues to this day). Two requests were clearly at the top: more type checking coverage, and faster mypy runs. Clearly our performance and adoption growth work was not yet done. We doubled down on these tasks.

More performance!

Incremental builds made mypy faster, but it still wasn’t quite fast. Many incremental runs took about a minute. The cause is perhaps not surprising to anybody who has worked on a large Python codebase: cyclic imports. We had sets of hundreds of modules that each indirectly import each other. If any file in an import cycle got changed, mypy would have to process all the files in the cycle, and often also any modules that imported modules from this cycle. One of these cycles was the infamous “tangle” that has caused much grief at Dropbox. At one point it contained several hundred modules, and many tests and product features imported it, directly or indirectly.

We looked at breaking the tangled dependencies, but we didn’t have the resources to do that. There was just too much code we weren’t familiar with. We came up with an alternative approach—we were going to make mypy fast even in the presence of tangles. We achieved this through the mypy daemon. The daemon is a server process that does two interesting things. First, it keeps information about the whole codebase in memory, so that each mypy run doesn’t need to load cache data corresponding to thousands of import dependencies. Second, it tracks fine-grained dependencies between functions and other constructs. For example, if function foo calls function bar, there is a dependency from bar to foo. When a file gets changed, the daemon first processes just the changed file in isolation. It then looks for externally visible changes in that file, such as a changed function signature. The daemon uses the fine-grained dependencies to only recheck those functions that actually use the changed function. Usually this is a small number of functions.
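
In sketch form, the daemon’s fine-grained dependency tracking is an inverted index from a symbol to the functions that must be re-checked when it changes (the module and function names here are hypothetical):

# Inverted dependency index: symbol -> functions that use it.
deps = {
    "pkg.util.bar": {"pkg.api.foo", "pkg.jobs.run"},  # foo and run both call bar
}

def affected(changed_symbols, deps):
    to_recheck = set()
    for sym in changed_symbols:
        to_recheck |= deps.get(sym, set())
    return to_recheck

# If bar's externally visible signature changes, only its users are re-checked:
print(affected({"pkg.util.bar"}, deps))  # {'pkg.api.foo', 'pkg.jobs.run'}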

Implementing all this was a challenge, since the original implementation was heavily geared towards processing things a file at a time. We had to deal with numerous edge cases around what needs to be reprocessed when various things change, such as when a class gets a new base class. After a lot of painstaking work and sweating the details, we were able to get most incremental runs down to a few seconds, which felt like a great victory.

Even more performance!

Together with remote caching that I discussed above, mypy daemon pretty much solved the incremental use case, where an engineer iterates on changes to a small number of files. However, worst-case performance was still far from optimal. Doing a clean mypy build would take over 15 minutes, which was much slower than we were happy with. This was getting worse every week, as engineers kept writing new code and adding type annotations to existing code. Our users were still hungry for more performance, and we were happy to comply.

We decided to get back to one of the early ideas behind mypy—compiling Python to C. Experimenting with Cython (an existing Python-to-C compiler) didn’t give any visible speed-up, so we decided to revive the idea of writing our own compiler. Since the mypy codebase (which is written in Python) was already fully type annotated, it seemed worth trying to use these type annotations to speed things up. I implemented a quick proof-of-concept prototype that gave a performance improvement of over 10x in various micro-benchmarks. The idea was to compile Python modules to CPython C extension modules, and to turn type annotations into runtime type checks (normally type annotations are ignored at runtime and only used by type checkers). We were effectively planning to migrate the mypy implementation from Python to a bona fide statically typed language, which just happens to look (and mostly behave) exactly like Python. (This sort of cross-language migration was becoming a habit—the mypy implementation was originally written in Alore, and later a custom Java/Python syntax hybrid.)
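
To make the “annotations become runtime checks” idea concrete, here is a toy pure-Python decorator that mimics the visible effect. mypyc itself compiles modules to C extensions, so this is an illustration of the idea, not of how mypyc works:

# Toy illustration: enforce simple (non-generic) annotations at call time.
import functools
import inspect

def check_types(func):
    hints = {k: v for k, v in func.__annotations__.items() if isinstance(v, type)}
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            if name in hints and not isinstance(value, hints[name]):
                raise TypeError(f"{name} must be {hints[name].__name__}")
        return func(*args, **kwargs)
    return wrapper

@check_types
def fib(n: int) -> int:
    return n if n <= 1 else fib(n - 1) + fib(n - 2)

fib("10")  # TypeError: n must be int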

Targeting the CPython extension API was key to keeping the scope of the project manageable. We didn’t need to implement a VM or any libraries needed by mypy. Also, all of the Python ecosystem and tools (such as pytest) would still be available for us, and we could continue to use interpreted Python during development, allowing a very fast edit-test cycle without having to wait for compiles. This sounded like both having your cake and eating it, which we quite liked!

The compiler, which we called mypyc (since it uses mypy as the front end to perform type analysis), was very successful. Overall we achieved around 4x speedup for clean mypy runs with no caching. The core of the mypyc project took about 4 calendar months with a small team, which included Michael Sullivan, Ivan Levkivskyi, Hugh Han, and myself. This was much less work than what it would have taken to rewrite mypy in C++ or Go, for example, and much less disruptive. We also hope to make mypyc eventually available for Dropbox engineers for compiling and speeding up their code.

There was some interesting performance engineering involved in reaching this level of performance. The compiler can speed up many operations by using fast, low-level C constructs. For example, calling a compiled function gets translated into a C function call, which is a lot faster than an interpreted function call. Some operations, such as dictionary lookups, still fall back to general CPython C API calls, which are only marginally faster when compiled. We can get rid of the interpretation overhead, but that only gives a minor speed win for these operations.

We did some profiling to find the most common of these “slow operations”. Armed with this data, we tried to either tweak mypyc to generate faster C code for these operations, or to rewrite the relevant Python code using faster operations (and sometimes there was nothing we could easily do). The latter was often much easier than implementing the same transformation automatically in the compiler. Longer term we’d like to automate many of these transformations, but at this point we were focused on making mypy faster with minimal effort, and at times we cut a few corners.

Reaching 4 million lines

Another important challenge (and the second most popular request in mypy user surveys) was increasing type checking coverage at Dropbox. We tried several approaches to get there: from organic growth, to focused manual efforts of the mypy team, to static and dynamic automated type inference. In the end, it looks like there is no simple winning strategy here, but we were able to reach fast annotation growth in our codebases by combining many approaches.

As a result, our annotated line count in the biggest Python repository (for back-end code) grew to almost 4 million lines of statically typed code in about three years. Mypy now supports various kinds of coverage reports that make it easy to track our progress. In particular, we report sources of type imprecision, such as using explicit, unchecked Any types in annotations, or importing 3rd party libraries that don’t have type annotations. As part of our effort to improve type checking precision at Dropbox, we also contributed improved type definitions (a.k.a. stub files) for some popular open-source libraries to the centralized Python typeshed repository.

We implemented (and standardized in subsequent PEPs) new type system features that enable more precise types for certain idiomatic Python patterns. A notable example is TypedDict, which provides types for JSON-like dictionaries that have a fixed set of string keys, each with a distinct value type. We will continue to extend the type system, and improving support for the Python numeric stack is one of the likely next steps.
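
For example, TypedDict lets mypy check JSON-like dictionaries field by field. (The MovieRecord name is just illustrative; TypedDict lives in typing on Python 3.8+ and in typing_extensions before that.)

from typing import TypedDict

class MovieRecord(TypedDict):
    name: str
    year: int

m: MovieRecord = {"name": "Metropolis", "year": 1927}    # OK
bad: MovieRecord = {"name": "Metropolis", "year": "27"}  # mypy error: year must be int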

Here are highlights of the things we’ve done to increase annotation coverage at Dropbox:

Strictness. We gradually increased strictness requirements for new code. We started with advice from linters asking to write annotations in files that already had some. We now require type annotations in new Python files and most existing files.

Coverage reporting. We send weekly email reports to teams highlighting their annotation coverage and suggesting the highest-value things to annotate.

Outreach. We gave talks about mypy and chatted with teams to help them get started.

Surveys. We run periodic user surveys to find the top pain points and we go to great lengths to address them (as far as inventing a new language to make mypy faster!).

Performance. We improved mypy performance through mypy daemon and mypyc (p75 got 44x faster!) to reduce friction in annotation workflows and to allow scaling the size of the type checked codebase.

Editor integrations. We provided integrations for running mypy for editors popular at Dropbox, including PyCharm, Vim, and VS Code. These make it much easier to iterate on annotations, which happens a lot when annotating legacy code.

Static analysis. We wrote a tool to infer signatures of functions using static analysis. It can only deal with sufficiently simple cases, but it helped us increase coverage without too much effort.

Third party library support. A lot of our code uses SQLAlchemy, which uses dynamic Python features that PEP 484 types can’t directly model. We made a PEP 561 stub file package and wrote a mypy plugin to better support it (it’s available as open source).

Challenges along the way

Getting to 4M lines wasn’t always easy and we had a few bumps and made some mistakes along the way. Here are some that will hopefully prevent a few others from making the same mistakes.

Missing files. We started with only a small number of files in the mypy build. Everything outside the build was not checked. Files were implicitly added to the build when the first annotations were added. If you imported anything from a module outside the build, you’d get values with the Any type, which are not checked at all. This resulted in a major loss of typing precision, especially early in the migration. This still worked surprisingly well, though it was a typical experience that adding a file to the build exposed issues in other parts of the codebase. In the worst case, two isolated islands of type checked code were being merged, and it turned out that the types weren’t compatible between the two islands, necessitating numerous changes to annotations! In retrospect, we should have added basic library modules to the mypy build much earlier to make things smoother.

Annotating legacy code. When we started, we had over 4 million lines of existing Python code. It was clear that annotating all of that would be non-trivial. We implemented a tool called PyAnnotate that can collect types at runtime when running tests and insert type annotations based on these types—but it didn’t see much adoption. Collecting the types was slow, and generated types often required a lot of manual polish. We thought about running it automatically on every test build and/or collecting types from a small fraction of live network requests, but decided against it as either approach is too risky.
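
For reference, PyAnnotate’s runtime collection looks roughly like this (based on the project’s documented API; run_test_suite is a hypothetical stand-in for whatever exercises the code):

# pip install pyannotate
from pyannotate_runtime import collect_types

collect_types.init_types_collection()
collect_types.start()
run_test_suite()  # hypothetical: exercise the code whose types you want to observe
collect_types.stop()
collect_types.dump_stats("type_info.json")  # input for inserting draft annotations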

In the end, most of the code was manually annotated by code owners. We provide reports of highest-value modules and functions to annotate to streamline the process. A library module that is used in hundreds of places is important to annotate; a legacy service that is being replaced much less so. We are also experimenting with using static analysis to generate type annotations for legacy code.

Import cycles. Previously I mentioned that import cycles (the “tangle”) made it hard to make mypy fast. We also had to work hard to make mypy support all kinds of idioms arising from import cycles. We recently finished a major redesign project that finally fixes most import cycle issues. The issues actually stem from the very early days of Alore, the research language mypy originally targeted. Alore had syntax that made dealing with import cycles easy, and we inherited some limitations from the simple-minded implementation (which was just fine for Alore). Python does not make dealing with import cycles easy, mainly because statements can mean multiple things. An assignment might actually define a type alias, for example, and mypy can’t always detect that until most of an import cycle has been processed. Alore did not have this kind of ambiguity. Early design decisions can still cause you pain many years later!
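
A concrete instance of that ambiguity: the two assignments below are syntactically identical, but one defines a type alias and the other an ordinary value, and inside an import cycle mypy may not be able to tell which is which until more of the cycle has been processed.

from typing import Dict

Alias = Dict[str, int]   # a type alias: usable in annotations
counts = {"a": 1}        # an ordinary value assignment

def total(m: Alias) -> int:  # only valid because Alias turned out to be a type
    return sum(m.values())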

To 5 million lines and beyond

It has been a long journey from the early prototypes to type checking 4 million lines in production. Along the way we’ve standardized type hinting in Python, and there is now a burgeoning ecosystem around Python type checking, with IDE and editor support for type hints, multiple type checkers with different tradeoffs, and library support.

Even though type checking is already taken for granted at Dropbox, I believe that we are still in the early days of Python type checking in the community, and things will continue to grow and get better. If you aren’t using type checking in your large-scale Python project, now is a good time to get started—nobody I’ve talked to who has made the jump has regretted it. It really makes Python a much better language for large projects.

Are you interested in working on Developer Infrastructure at scale? We’re hiring!

05 Sep 14:15

Bird’s-eye view of D3.js

by Nathan Yau

D3.js can do a lot of things, which provides valuable flexibility to construct the visualization that you want. However, that flexibility can also intimidate newcomers. Amelia Wattenberger provides a bird’s-eye view of the library to help make it easier to get started and gain a better understanding of what the library can do. Even if you’re already familiar with D3.js, it can serve as a useful reference.

Tags: Amelia Wattenberger, d3js, overview