Shared posts

05 Sep 14:52

Why teach computer science?

by peter@rukavina.net (Peter Rukavina)

Linda Liukas writes about why we should teach computer science to young people:

Teaching computer science in primary school is not only about coding. It’s about developing a love of learning and offering widely-applicable, long-term ideas. A way of thinking that provides a new perspective to the world. And that’s what computer science does.

We shouldn’t teach computer science only because it’s useful, but because it’s interesting and intensely creative. Computer science blends intellectual pleasure of reason and logic with the practicality of engineering. It blends the beauty of arts with the change-the-world ethos of social sciences.

Computer science as a vector for active citizenship is a much more interesting notion than computer science as an economic development tool.

05 Sep 14:52

Union Station

by David A. Banks

Why do socialists love trains? Commentators all across the political spectrum agree that they do: Conservative pundit George Will, writing in Newsweek in 2011, posited that “the real reason for progressives’ passion for trains is their goal of diminishing Americans’ individualism in order to make them more amenable to collectivism.” Earlier this year, Nathan J. Robinson wrote in Current Affairs, “Cars are the freedom to be lonely and stuck in traffic. Trains are the freedom from having to maintain your own personal transportation container.” Christopher Kempf writes in Jacobin, as if it were a self-evident fact, that: “As with healthcare and education, the broad mass of people would benefit enormously from a publicly run rail system that delivered an efficient, affordable alternative to travel by plane or car.” The answer, then, seems obvious: trains are machines that symbolically and literally impose collective will and action.

In an interview published in Truthout, Noam Chomsky argues that market forces don’t deliver mass transit because such systems do not offer “choices that involve common effort and solidarity and mutual support and concern for others.” However, nearly every railroad in the United States, whether connecting neighborhoods within cities or cities to each other, was built by private, profit-seeking companies. It was only in 1970 that Amtrak consolidated dozens of private passenger rail systems; many cities’ urban rail lines and bus systems were bought by their respective local governments nine years prior as part of the Kennedy Administration’s Housing Act of 1961. Today, with the exception of South Florida’s Brightline, passenger rail is publicly owned and everyone argues for and against trains as if that has always been the case; as if collective ownership is baked into the technology itself. One look at mid-19th century history shows that this simply isn’t true.


The cultural valence of the technology further complicates matters. Depending on who you are and where you live, the train is associated with the poor or the rich: the working people taking overcrowded and unreliable commuter trains, versus the Acela set shuttling between Manhattan and DC condominiums. Land markets being what they are, proximity to well-maintained public transit can command a premium and cause what Casey Dawkins and Rolf Moeckel call “Transit-Induced Gentrification.” At the same time, many small towns rely on Amtrak as their last inter-city transportation option, and general Amtrak ridership has grown year-over-year for about a decade.

To be clear, we need publicly owned rail. It could, quite literally, save the planet. The environmental benefits of relying on trains instead of planes and cars are overwhelming, not only when compared mile-for-mile and energy-per-passenger, but because of the cumulative benefits of building cities and towns using the higher densities that are most compatible with terrestrial mass transit. Barring some new fundamental breakthrough energy source, a sustainable planet will require a network of something that looks like trains. There is, nonetheless, a historical irony in pinning humanity’s hopes for post-capitalist survival on a technology that once heralded the arrival of capitalist empires.

The history of American Railroads’ transition from disruptive corporate behemoths to necessary semi-public infrastructure can inform new struggles around new disrupters. Just like the railroad barons of the late nineteenth century, Amazon, Apple, Alphabet, Facebook, and Microsoft command vast logistics networks with little-to-no democratic input. A hundred years ago, when the Union Pacific railroad came to town, your town changed forever. Being “online” then meant essentially the same thing it means today: you and everyone around you become intimately connected to a vast network of information and money. A web that can bring unprecedented prosperity or carnage depending on who you are and how well you can leverage change to your advantage. Asking why trains are the beloved technology of today’s leftists teaches us how to make future demands against companies like Amazon.


Trains were once the quintessential capitalist technology. In the first volume of Capital, Marx describes a growing network of factories connected by telegraph lines and railroads “in which the laborer becomes a mere appendage to an already existing material condition of production.” John Ruskin, a bourgeois cultural commentator contemporary to Marx, agreed that those who rode the train were treated like parcels rather than people. The historian Wolfgang Schivelbusch said of both men’s work that they were lamenting the loss of “what one might call the ‘esthetic freedom of the pre-industrial subject.’” Walking and horse-powered rides were bumpy, slow and dangerous; however, once the horse shit and creaking wheels of the carriage ride are compared to the steam and steel of the railroad, the former become the relics of a slower, more authentic past.

Transportation machinery meant that those who controlled the machinery got to say where people lived and where they could go. At the continental scale, according to the demographer Katherine J. Curtis White, the presence of railroads in Midwestern counties acted as a colonizing force. Railroad companies would even sell one-way “exploration tickets” that included the price of new land out west. As the frontier closed, railroads consolidated their lines to a few chosen winners.

If tiny electric scooters strike fear and anger in the hearts of today’s urbanites, one can only imagine what it was like seeing a train for the first time: Here is a screaming beast that shakes the earth and, once it stops, disgorges strange people, animals, and all sorts of freight. Like an army, your heart might swell with pride or get caught in your throat depending on whether you see it as your defender or your enemy. For the Amerindian tribes, the railroad was a clarion call for white settlement, instantly recognized for the threat it posed. When the transcontinental railroad reached deep into the prairie lands, Civil War veterans were hired by the railroad companies to shoot at warriors defending their land. The names given to railroad stops held power. The land was no longer Ouiatenon, it was Terre Haute. Today something similar happens when neighborhoods like the historically black Fruit Belt neighborhood in Buffalo, New York suddenly show up on Google Maps under real estate agents’ preferred name, “Medical Park.”

Transit is so intimately connected to power that Marx carved out a special place for transportation and communication technologies in the advancement of capitalism. Both industries are essential in getting commodities and consumers to the same place. They also, as he wrote in volume one of Capital, instigate “special fluctuations in the markets” that result in “the sudden placing of large orders that have to be executed in the shortest possible time. The habit of giving such orders becomes more frequent with the extension of railways and telegraphs.” Anyone in the software business will recognize this as “crunch,” and warehouse workers from Japanese auto plants to Midwest Amazon fulfillment centers know it as “just in time inventory processing.”

Railroads’ centrality to capitalism may in fact be the only thing Karl Marx and Ayn Rand agreed on. Rand loved the train for its ability to destroy esthetic freedom in the name of profitable efficiency, but also for its role as an “avenging angel,” as Kevin Baker put it in Harper’s. Atlas Shrugged contains within it no fewer than three different instances where scores of people die in horrific train accidents. Teachers, politicians, and lawyers are sentenced to immolation, asphyxiation, and drowning in the river below the tracks for reasons related to their inability to exercise greed properly. Trains kill Rand’s characters in the same way that a plant drops leaves in a dry spell: because the greater good of the future requires a culling.

Trains for the working class were the harbingers of freedom, democracy, and westward expansion. The train afforded city planners the option to build out and decentralize, giving working people the option to live more than a few minutes’ walk away from where they toiled. For Americans it also meant more people could come together in conventions and rallies. Sarah H. Gordon in Passage to Union wrote, “by 1840 improved transportation by both railroad and steamboat made possible the mass conventions which have chosen presidential candidates ever since.” The nationalist populism of Andrew Jackson would not have been possible without the train, and a sanitized version of that populism was evoked by Barack Obama after his first win when he toured the country by train like a Gilded Age politician.


Sarah H. Gordon notes that the construction and conspicuous naming of ornate “union stations” had a dual purpose: “While it meant that more than one railroad used the station, therefore uniting the services of many railroads, it also referred to the old idea of uniting the country through a railroad network.” In some sense, the Left loves trains because of a different kind of union story. Unionized train conductors and engineers brought the country to its knees in the Great Strike of 1877 which, according to the labor historian Nick Salvatore, was “an impressive display of cross-ethnic unity.” More localized displays of labor’s power were common occurrences all across the country well past the first world war and could be called in sympathy for other workers in different industries. A technology that served the rich was made into an egalitarian utility, something that unionized conductors could start and stop in support of labor’s demands.

Today, the train is a popular literary symbol of both relentless time and nonconsensual collectivity because it literally imposes these social forces. Think of Bong Joon-ho and screenwriter Kelly Masterson’s 2013 hit Snowpiercer, and Leo Tolstoy and his Anna Karenina, in which the ceaseless, forward momentum of the train (or time) reminds us that doing nothing has its own set of consequences. For leftists in 2019 this message is equal parts medicine and chicken soup: Getting on the train means getting to the platform on time and going to the same place as everyone else; the train is not only hurtling you forward, it is transporting all of us together whether we like it or not.


The train is loved by socialists for the same reason anything becomes socialist: because people made it that way. For the capitalist, trains are a source and symbol of centralized power; to own a railroad in the 1860s was to have a say in the fate of anything that moves, whether it be wheat, oil, or people. The same can be said of the biggest companies in the world today: Amazon, Walmart, and Alphabet together move and serve most of the nation’s goods, services, and digital information. Transportation and communication — not production or manufacturing — crown the oligarchy.

In their recent book The People’s Republic of Walmart, Leigh Phillips and Michal Rozworski argue that big box stores can be understood as massive planned economies on the scale of the USSR. Walmart sets prices, negotiates with importers and producers, and monitors their locations and website to anticipate where inventory is needed. Amazon goes further, “using the fruits of modern IT to distribute consumer goods. In short, Amazon is a master planner.” Phillips and Rozworski see companies like Amazon and Walmart as ripe fruit ready to be picked from the tree of capital and deposited into the hands of the people. Amazon is so groundbreaking in the logistics game that they crown Jeff Bezos the “Stalin of online retail.”

Amazon, for all its faults, is unmistakably similar to the universal delivery system described in Looking Backward, the Victorian utopian romance novel that inspired anarchist reformers like Ebenezer Howard and others to redesign cities to be healthier and more livable. The novel’s central premise is simple: a man named Julian West falls asleep in 1887 and wakes up in the year 2000; his caregivers, the Leete family, describe this brand new world to him. Edith Leete takes him to their ward’s central store, which Julian recognizes as “merely the order department of a wholesale house.” Edith agrees, describing massive, centralized warehouses where “orders are read off, recorded, and sent to be filled, like lightning.” No matter how remote your home may be, a series of pneumatic tubes and electronic communication devices are capable of bringing whatever you need.

What is very much not like Amazon is that everyone in the country gets the same salary, including the clerks in the stores and the workers in the warehouses. The savings in distribution costs are fairly distributed to everyone: When something becomes more efficient, the savings go on to citizens, not the owner of the system. As Phillips and Rozworski conclude, “It is not enough to say, ‘Nationalize it!’ to Amazon and Walmart.” There’s little gained by taking the same data, collected in the same secretive manner, and changing the owner from Jeff Bezos to Donald Trump or even Bernie Sanders. The whole point of socialism isn’t just to nationalize railroads or server farms; it is to change them such that they serve the needs of people, not profit.

Nationalization without socialism is the reason the increasing demand for trains has not translated into a better system. By changing the dominant form of transportation from rails to roads in the early 20th century, capital dealt a serious blow to the nation’s burgeoning and militant multi-racial communism. While auto plant workers unionized, and stopped work on car production more than once, they could not instantly stop all transportation in a city the way streetcar conductors did. Within cities, race-segregated streetcar suburbs, and then car-oriented development patterns, offered decentralization and neighborhood-level segregation.


In spite of increasing demand, we now live with an anemic national rail system. Unlike highways or airports, passenger rail no longer has an extensive lobbying network in state and federal governments — all forms of transportation require billions in subsidies, but only rail has to regularly beg for it. Amtrak (like the post office) is still expected to derive most of its revenues from the services it provides, thus keeping it tied to the whims of market forces. More than a mere inconvenience, the dilapidation of rail systems — everything from the New York City subway, to Amtrak, to your local commuter rail (if you’re lucky enough to have one) — feels like a betrayal. Unlike airlines, with their ever-expanding gradients of classes, and car brands’ embrace of status posturing, trains offer a fairly flat hierarchy. There’s something profane about trains being used as terrestrial cruise ships: Like a bar made from the detritus of an old hardware store, a luxury train ride such as the thousand-dollar Rocky Mountaineer may be nice, but its enjoyment is tainted by the knowledge that something far more useful to more people once existed from the constituent parts.

A nationalized Amazon, or a Google that did not significantly alter how it made its money, would be just as bad, maybe even worse, if the legitimate violence of the state and user surveillance were united more than they already are. Similar to transit-induced gentrification, a state-owned Amazon one-day fulfillment center would do more to raise property values than provide needed services. We can imagine a Facebook that has been so sanitized and automated that it has lost all ability to draw together disparate groups for comfort and solidarity; an Amazon that provides no useful or cheap products, just a mix of luxury goods and a handful of staples that they have a protected monopoly on selling.


The interpretability of trains, however, should lead us to be optimistic about the political malleability of even the most disruptive technologies. Railroad systems are massive, powerful things that were wrought on the landscape by colonists and capitalists but eventually, albeit unevenly and with mixed success, brought into public ownership. Trains are raw, loud, obvious embodiments of power and have been around longer; they can offer cautionary tales and inspiration for future socialization schemes. Trains represent a kind of power that is reassuringly obvious when compared to the blackboxed, mysterious Foucauldian power of algorithms and hidden-away server farms.

The establishment of the railroads was the precursor and main economic instigator of standardized global time zones. With great scale comes an even greater appetite for control over all variables in the system, and so railroads also drew law, property, economic theories, identity, and nation-states into their orbit. As a result, even the most remote towns without a railroad stop were subject to the ripple effects. Property laws became more stringent; centralized production and mass distribution superseded regional production. And so when we think of how trains can inspire us to demand a new and better world, we have to think about what kinds of laws, habits, and cultures our phones and their connected services afford, and which ones might be brought to bear on the public good. What can we do now, so that in the near future our phones and web services, while as imperfect as Amtrak, could still indicate a better future to come? Can the smartphone elicit the same kind of collectivism that terrifies George Will?

Bringing about what Phillips and Rozworski call the “Socialist Anthropocene” would require more institutional than technological innovation. The requisite information technology exists to coordinate across offices, distribution centers, and manufacturing sites; how that planning is accomplished democratically is less certain. There are rays of democratic hope in the walk-outs and unionization efforts at tech and media companies. Just like the train, the Amazon fulfillment center and the phone are labor’s choke points for capital. When these distribution nodes fail, the system comes to a stop. Wielding that power and finding new and durable means of democratizing it is not only a better world to imagine, it might be the only way to get there.

05 Sep 14:51

How to plan and write pieces of any length

by Josh Bernoff

What’s the best process to write something? The answer depends, more than anything else, on how long it is. You need a plan, but that plan depends on the length of what you’re aiming for. (This advice is for nonfiction. If you write fiction, I’m curious how much the process differs for you.) This post … Continued

The post How to plan and write pieces of any length appeared first on without bullshit.

05 Sep 14:51

Toronto Reboots Vision Zero to Stop All Vulnerable Road User Deaths

by Sandy James Planner
Photo by Kaique Rocha on Pexels.com

Toronto Star reporter Ben Spurr has continued the conversation about road violence against vulnerable road users in that city. It’s been a surprisingly uphill battle in Toronto where 190 pedestrians and 7 cyclists have died in the past five years. But Toronto is not the big city leader in road deaths in Canada. Vancouver is.

The City of Toronto has 2.2 road deaths per 100,000 population. Vancouver actually has a higher rate than the City of Toronto, at 2.4 road deaths per 100,000. And Montreal’s rate is almost half, at 1.3 road deaths per 100,000. You can take a look at the statistics here.

The residents of Toronto have protested against road violence and demanded change in making their city streets and places safer for vulnerable road users. People who have lost loved ones due to road violence have organized and protested in groups such as Friends and Families for Safe Streets.

The City of Toronto originally implemented a 2016 Vision Zero plan that did not aim at the complete elimination of road deaths and serious injuries, but rather at a percentage reduction in fatalities.

Toronto soon realized the folly of that concept, as “the number of fatal collisions in the past 5 years has seen a general increase compared to the previous 5 years. The upward trend is most notably seen in pedestrian fatalities.”

In a June 2019 reboot of Vision Zero called the “2.0” Road Safety Update, Toronto’s engineering staff got serious about the safe systems approach, with Council adopting a speed management strategy, road design improvements, and an education and engagement plan. As well, two pedestrian death traps were identified for special attention: mid-block crossings (responsible for 50 percent of pedestrian deaths) and vehicles turning through crosswalks (causing 25 percent of deaths). The City also directly stated that its goal was now no deaths or serious injuries on the road, which is the true Vision Zero approach.

Toronto’s data on road violence also mirrored Vancouver’s: the majority of pedestrians killed are over 55 years old. But as in Vancouver, driver education and the design and timing of intersection crossings still do not reflect the specific requirements of seniors or those with accessibility needs.

The City of Toronto’s analysis identified slowing road speeds as potentially preventing almost 20 percent of fatalities and serious injuries, with road design modifications and signalization of mid-block crossings reducing mortality by another 23 percent. Protected cycling lanes and leading pedestrian intervals (head-start signals) could mitigate another 14 percent of deaths and serious injuries.

It is always much easier to point the finger at the vulnerable road user as the pesky problem in any vehicular crash. Throughout the 20th century, laws have normalized low penalties for drivers who kill or seriously maim pedestrians or cyclists, almost as if road violence were accepted collateral for standardized vehicular movement.

Despite the victim blaming about inattentiveness of pedestrians and cyclists, Toronto Police point out that between 2007 and 2017, 65 percent of victims killed were over 55 years old, and most of that cohort would not own cell phones.

Data collected and interpreted by the Toronto Star shows Toronto statistics that are similar to Vancouver’s. In 45 percent of crashes causing death or serious injury, the pedestrian had the right of way. As in Vancouver, pedestrian collisions increase in November with shorter days. In Toronto, analysis shows that 75 percent of severe pedestrian crashes happen during good weather conditions, when travel is faster.

One of the struggles for Toronto’s Mayor John Tory is that the “two main goals for his administration’s road policies: easing traffic congestion, and making streets safer through Vision Zero, which he has backed at council” may actually work against each other. Traffic congestion slows vehicular speed, allowing for more driver reaction time and less severe injuries in crashes with vulnerable road users. Congestion also facilitates the use of alternative ways of moving, such as the King Street streetcar and buses.

It is clear that there needs to be a cultural shift in favour of recognizing pedestrians and cyclists as equal road users that have the right to travel safely on the city’s streets and public spaces. And that needs to happen now.

Toronto’s General Manager of Engineering, Barbara Gray, sums up the civic approach to Vision Zero in a YouTube video.

 

 

05 Sep 14:51

Hello, Computer: Inside Apple’s Voice Control

by Steven Aquino

This year’s Worldwide Developers Conference was big. From dark mode in iOS 13 to the newly-rechristened iPadOS to the unveiling of the born-again Mac Pro and more, Apple’s annual week-long bonanza of all things software was arguably one of the most anticipated and exciting events in recent Apple history.

Accessibility certainly contributed to the bigness as well. Every year Apple moves mountains to ensure accessibility’s presence is felt not only in the software it previews, but also in the sessions, labs, and other social gatherings in and around the San Jose Convention Center.

“One of the things that’s been really cool this year is the [accessibility] team has been firing on [all] cylinders across the board,” Sarah Herrlinger, Apple’s Director of Global Accessibility Policy & Initiatives, said to me following the keynote. “There’s something in each operating system and things for a lot of different types of use cases.”

One announcement that unquestionably garnered some of the biggest buzz during the conference was Voice Control. Available on macOS Catalina and iOS 13, Voice Control is a method of interacting with one’s Mac or iOS device using only your voice. A collaborative effort between Apple’s Accessibility Engineering and Siri groups, Voice Control aims to revolutionize the way users with certain physical motor conditions access their devices. At a high level, it’s very much a realization of the kind of ambient, voice-first computing dreamed up by sci-fi television stalwarts like The Jetsons and Star Trek decades ago. You talk, it responds.

And Apple could not be more excited about it.

“I Had Friggin’ Tears in My Eyes”

The excitement for Voice Control at WWDC was palpable. The sense of unbridled pride and joy I got from talking to people involved in the project was unlike anything I’d seen before. The company’s ethos to innovate and enrich people’s lives is a boilerplate talking point at every media event. But to hear engineers and executives like Herrlinger gush over Voice Control was something else: it was emotional.

Nothing captures this better than the anecdote Craig Federighi gave at John Gruber’s live episode of his podcast, The Talk Show. During the segment on Voice Control, Federighi recounted a story about an internal demo of the feature he saw from members of Apple’s accessibility team during a meeting. The demonstration went so well, he said, that he almost started to cry while backstage.

“It’s one of those technologies…you see it used and not only are you amazed by it, but you realize what it could mean to so many people,” Federighi said to Gruber. “Thinking about the passion [of] members of the Accessibility team and the Siri team and everyone who pulled that together is awesome. It’s some of the most touching work we do.”

Federighi’s account completely jibes with the sentiment around WWDC. Everyone I spoke to – be it fellow reporters, attendees, or Apple employees – expressed the same level of enthusiasm for Voice Control. The consensus was that it is so great. I’ve heard the engineering and development process for Voice Control was quite the undertaking for workers in Cupertino. It took, as mentioned at the outset, a massive, cross-functional collaborative effort to pull this feature together.

In a broader scope, the emotion behind seeing Voice Control come to fruition lies not only in the technology itself, but in its reveal too.

That Apple chose to devote precious slide space, as well as a good chunk of stage time, to talk up Voice Control is highly significant. As with the decision to move the Accessibility menu to the front page of Settings in iOS 13, the symbolism is important. Apple has spent time talking about accessibility at various events over the last several years, and for them to do so again in 2019 serves as yet another poignant reminder that the company cares deeply for the disabled community. It is a big deal that Apple highlights accessibility alongside the other marquee, mainstream features at the biggest event in the Apple universe every summer.

“My success is completely determined by the technology I have available to me,” said Ian Mackay, who became a quadriplegic as the result of a cycling accident and who starred in Apple’s Voice Control video shown at WWDC. “Knowing that accessibility is important enough for Apple to highlight at a huge event like WWDC reaffirms to me that Apple is interested and engaged in furthering and enhancing these technologies that give me independence.”

A Brief History of Voice Control

The Voice Control feature we know today has lineage in Apple history. One of the banner features of the iPhone 3GS,1 released in 2009, was Voice Control.2

The differences are vast. The version that shipped ten years ago was rudimentary, replete with a rudimentary-sounding voice. At the time, Apple touted Voice Control for its ability to allow users “hands-free operation” of their iPhone 3GS; Phil Schiller talked up the “freedom of voice control” in the press release. The functionality was bare-bones: you could make calls, control music playback, and ask what’s playing. In the voice computing timeline, it was prehistoric technology.3 Of course, Voice Control’s launch with the iPhone 3GS in June 2009 pre-dated Siri by over two years. Siri wouldn’t debut until October 2011, with the iPhone 4S.

The Voice Control of 2019, by contrast, is a supercomputer. Making phone calls and controlling music playback is par for the course nowadays. With this Voice Control, you quite literally tell your computer to wake up and do things like zoom in and out of photos, scroll, drag and drop content, drop a pin on a map, use emoji – even learn a new vocabulary.

When talking about emerging markets or new technologies, Tim Cook likes to say they’re in the “early innings.” Voice-first computing surely is in that category. But once you compare it to where it was a decade ago, the progress made is astoundingly obvious. The Voice Control that will ship as part of macOS Catalina and iOS 13 is light years ahead of its ancestor; it’s so much more sophisticated that it’s exciting to wonder how the rest of the voice-first game is going to play out.

Voice Control’s Target Audience

The official reason Apple created Voice Control is to provide yet another tool with which people with certain upper body disabilities can access their devices.

Voice Control shares many conceptual similarities with the longstanding Switch Control feature, first introduced six years ago with iOS 7. Both enable users who can’t physically work with a mouse or touchscreen to manipulate their devices with the same fluidity as those traditional input devices. They are clearly differentiated, however, largely in their respective interaction models. Where Switch Control relies solely on switches to navigate a UI, Voice Control ups the ante by doing so using only the sound of your voice.


There is also opportunity for Voice Control to have relevance beyond the original intended use case. It might appeal to people with RSI issues, as using one’s voice to control the machine would alleviate the pain and fatigue associated with using a keyboard and pointing device. Likewise, others might simply find it fun to try Voice Control for the futuristic feeling of telling their computer to do stuff and watching it respond accordingly. Either way, it’s good that accessibility gets more mainstream exposure.

As Mackay told me in our interview: “I feel Sarah Herrlinger said it best when she said, ‘When you build for the margins, you actually make a better product for the masses.’ I’m really excited to see how those with and without disabilities utilize this new technology.”

How Voice Control Works

The essence of Voice Control is this: you tell the computer what to do and it does it.

Apple describes Voice Control as a “breakthrough new feature that gives you full control of your devices with just your voice.” The possibilities for what you can do are virtually endless. Pretty much any task you might throw at your MacBook Air or iPad Pro, chances are good Voice Control will be able to handle it.

There is somewhat of a learning curve, insofar as you have to grasp what it can do and how you speak to it. By the same token, harnessing Voice Control is decidedly not like using a command line. The syntax has structure, but isn’t so rigid that it requires absolute precision. The truth is Voice Control is flexible; it is designed to be deeply customizable (more on this below). And of course, emblematic of the Apple ecosystem, the fundamentals of Voice Control work the same way across iOS and macOS.

Voice Control also is integrated system-wide,4 so it isn’t necessary to be in a particular app to invoke a command. Take writing an email, for instance. Imagine a scenario in which you’re browsing the web in Safari and suddenly remember you need to send an email to someone. With Safari running on your iPad (or iMac or iPhone), you can tell Voice Control to “Open Mail” and it’ll launch into Mail. From there, you can say “Tap/Click New Message” and a compose window pops up. Complete the metadata fields (send-to, copies, subject) with the corresponding commands, then dive into composing your message in the body text field. When you’re finished, saying “Tap Send” sends the message.

As the axiom goes, in my testing “it just works.” To initiate Voice Control, you tell the computer to “wake up.” This command tells the system to get ready and start listening for input. Basic actions in Voice Control involve one of three trigger words: “Open,” “Tap,” and “Click.” (Obviously, you’d use whichever of the last two was appropriate for the operating system you’re on at the moment). Other commands, such as “Double Tap,” “Scroll,” and “Swipe,” are common actions as well depending upon context. When you’re done, saying “go to sleep” tells the computer to drop the mics and stop listening.

Using Voice Control’s grid to zoom in on a specific area in Maps.

In addition to shouting into the ether, Voice Control includes a numbered grid system which lets users call out numbers in places where they may not know a particular name. In Safari, for example, the Favorites view can show little numbers (akin to footnotes) alongside a website’s favicon. Suppose MacStories is first in your Favorites. Telling the computer to “Open number 1” will immediately bring up the MacStories homepage. However many favorites you have, there will be a corresponding number for each should you choose to enable the grid (which is optional). You can also say “show numbers” to bring it up too. The grid is pervasive throughout the OS, touching everything from the share sheet to the keyboard to maps.

The grid system option lives in a submenu of Voice Control settings called Continuous Overlay, which Apple says “speeds interaction” with Voice Control. In addition to the grid system, there also are choices to show nothing, only show item numbers (sans grid), or item names.

You can optionally enable persistent labels for item names or numbers.

Beyond basic actions like tapping, swiping, and clicking, Voice Control also supports a range of advanced gestures. These include long presses, zooming in and out, and drag and drop. This means Voice Control users can fully harness the power and convenience of “power user” features like 3D Touch and Haptic Touch, as well as right-click menus on the Mac. Text-editing features like cut, copy, and paste and emoji selection are also supported. Some advanced commands include “Drag that” and “Long press on [object].”

Voice Control can be configured by the user to create a customized experience that’s just right for them. On iOS and the Mac, going to Voice Control in Settings shows a cavalcade of options. You can enable or disable commands such as “Show Clock” and even create your own. There are numerous categories, offering commands for everything from text selection to device control (e.g. rotating your iPad) to accessibility features and more. Voice Control is remarkably deep for a 1.0.

Customization of Voice Control commands.

One notable section in Voice Control settings is what Apple calls Command Feedback. Here, you have options to play sounds and show hints while using Voice Control. In my testing, I’ve enjoyed having both enabled because they’re nice secondary cues that Voice Control is working; the hints are especially helpful whenever I get stuck or forget what to say. It’s a terrific little detail that’s visually reminiscent of the second-generation Apple Pencil’s pairing and battery indicator. My only complaint would be that I wish the hint text were bigger and higher contrast.5 A small nit to pick.

Another noteworthy section is Vocabulary. Tapping the + button allows users to teach Voice Control words or phrases it wouldn’t know otherwise. Where this can come in particularly handy is when using industry-specific jargon often. If you’re an editor, for example, you could add common journalistic shorthands for headline (“hed”), subhead (“dek”), lead (“lede”), and paragraph (“graf”), amongst others, to make editing copy and working with colleagues easier and more efficient.

It’s worthwhile spending some time poking around in Voice Control’s settings as you play with the feature to get a sense of its capabilities. As mentioned, Voice Control has tremendous breadth and depth for a first version product; looking at it now, it’s easy to get excited for future iterations.

On a privacy-related note, in my briefings with Apple at WWDC the company was keen to emphasize that Voice Control was built to be privacy-conscious. Audio is processed locally on the device and never sent to iCloud or any other server. Apple does, however, provide an Improve Voice Control toggle that “shares activity and samples of your voice.” It is opt-in, disabled by default.

In the Trenches with Voice Control

To describe what Voice Control is and how it works is one way to convey its power and potential, but there is nothing like actively using it to see the kind of impact it has on someone’s life. For Ian Mackay, Voice Control lives up to the hype.

“When I first heard about Voice Control, my jaw dropped. It was exactly what I had been looking for,” he said.

In practice, Mackay finds Voice Control “impressively” reliable, noting that the Siri dictation engine, which Voice Control uses, is “quite accurate and, in my opinion, very intuitive.” He’s pleased Voice Control works the same way cross-platform, as using a universal dictation system in Siri “really lessens the learning curve and lets you focus your understanding on one intuitive set of commands.” This familiarity, he said, is key. It makes using documents and other files a seamless experience as he traverses different devices. This is all due to the continuity inherent to the same dictation engine driving Voice Control everywhere.

Todd Stabelfeldt, a software developer and accessibility advocate who appeared in Apple’s The Quadfather video and gave a lunchtime talk at WWDC 2017, is cautiously optimistic about Voice Control. “I thought like a software developer [when Voice Control was announced],” he said. “The time spent to create, design, write, and most importantly test! [I’m] generally excited, but as I have learned from my wife: ‘Trust but verify.’”

For his part, Stabelfeldt uses Dragon Naturally Speaking for his daily voice control needs, but is excited for Voice Control on iOS, especially for telephone calls and text messages. “With the amount of phone calls, text messages, [and] navigating I do during the day, having Voice Control will make these tasks a little easier and should assist with less fatigue,” he said.

As with all software, however, Voice Control is not so perfect that it can’t be improved. Mackay told me Voice Control falters at times in loud, crowded settings, and on noisy, windy days outside. He’s quick to note ambient noise is a problem for all voice-recognition software, not just Apple’s implementation. “The device has to hear what you’re saying, and although the microphones are great, enough background noise can still impair its accuracy,” he said. A workaround for Mackay is to use Switch Control in places where Voice Control can be troublesome. In fact, he thinks both technologies, which function similarly, complement each other well.

“[Noisy environments] are a great example of how Voice Control and Switch Control can work beautifully in tandem,” he said. “When you are in a noisy area, or perhaps you want to send a more private message, you can use Switch Control to interact with your phone. It also can really speed up your device use by using both technologies. Users will find that some things are faster with switch and some are faster with voice.”

Stabelfeldt echoes Mackay’s sentiment about noisy environments, saying it’s “part of the problem” with using dictation software. He added the Voice Control experience would be better if Apple created “an incredible headset” to use with it.6

Considering Voice Control and Speech Delays

Benefits notwithstanding, the chief concern I had when Voice Control was announced was whether I – or any other person with a speech delay – would be able to successfully use it. In this context, success is measured by the software’s ability to decipher a non-standard speech pattern. In my case, that would be my stutter.

I’ve written and tweeted about the importance of this issue a lot as digital assistants like Siri and Amazon’s Alexa have risen in prominence. As the voice is the primary user interface for these products,7 Voice Control included, the accessibility raison d’être is accommodating users, like me, who have a speech impairment.

Speech impairments are disabilities too. The crux of the issue is that these AI systems were built assuming normal fluency, which is a hard enough task given that humans are trying to teach machines to understand other humans. Ergo, it stands to reason that a stutterer’s speech compounds things by making the job exponentially more difficult. The problem is that a non-trivial number of people have some sort of speech delay. According to the National Institute on Deafness and Other Communication Disorders, some 7.5 million people in the United States “have trouble using their voices.” We deserve to experience voice interfaces like anyone else and to reap the benefits they bring as an assistive technology.

Yet for a certain segment of users – those with speech impairments – there is a real danger in voice-first systems like Siri, Alexa, and yes, Voice Control being perceived as mostly inaccessible by virtue of their inability to reliably understand when someone stutters through a query or command. Exclusion by incompetence is a lose-lose situation for the user and the platform owner.

It’s unfortunate and frustrating because it means the entire value proposition behind voice technology is lost; there’s little incentive to use it if it has trouble understanding you, regardless of the productivity gains. That’s why it’s so imperative for technology companies to solve these problems – I have covered Apple at close range for years so they’re my main focus, but to be clear, this is an industry-wide dilemma. I can confirm the Echo Dot sitting on my kitchen counter8 suffers the same setbacks with understanding me as Siri does on any of my Apple devices. To be sure, Amazon, Apple, Google, Microsoft, all the big players with all the big money have an obligation to throw their respective weights around to ensure the voice-driven technologies of the future are accessible to everyone.

In my testing of Voice Control, done primarily on a 10.5-inch iPad Pro running the iPadOS public beta, I’m pleased to report that Voice Control has responded well (for the most part) to my speech impediment. It’s been a pleasant surprise.

Stuttering, for me, has been a fact of life for as long as I can remember. It will happen, no matter what. But in using things like my HomePod or Voice Control, I have made a concerted effort to be more conscientious of my mindset, breathing, and comfort level. These all are factors that contribute to whether I stutter more severely (e.g. when I’m anxious or nervous), and they definitely play a role in how I use technology. Thus, while testing Voice Control, I’ve constantly reminded myself to slow down and consider what I should say and how I should phrase it.

And it has worked well, all things considered. Voice Control doesn’t understand me with 100 percent accuracy, but I can’t expect it to. It does a good job, though, about 80–90 percent of the time. Whatever work Apple has done behind the scenes to improve the dictation parser is paying off; it has gotten better and better over time.

Herrlinger did tell me at WWDC that, in developing Voice Control, Apple put in considerable work to improve the dictation parser so that it’d handle different types of speech more gracefully. Of course, the adeptness should grow with time.

Overall, the progress is very heartening. No matter the psychological tricks I use on myself, the software still needs to perform at least reasonably well. That Voice Control has exceeded my expectations in terms of understanding me gives me hope there’s a brighter future in store for accessible AIs everywhere.

Voice Control and the Apple Community

During and after the WWDC keynote, my Twitter feed was awash in praise and awe of Voice Control. That it resonated with so many in the Apple community is proof that Voice Control is among the crown jewels of this year’s crop of new features.

Matthew Cassinelli, an independent writer and podcaster who worked in marketing at Workflow prior to Apple acquiring it in 2017, is excited about how Voice Control can work with Shortcuts. He believes Voice Control and the Shortcuts app “seem like a natural pairing together in iOS 13 and iPadOS,” noting that the ability to invoke commands with your voice opens up the OS (and by extension, shortcuts) in ways that weren’t possible before. He shares a clever use case: one could, he says, take advantage of Voice Control’s Vocabulary feature to build custom names for shortcuts and trigger them by voice. Although the Shortcuts app is touch-based, Cassinelli says in his testing of Voice Control that any existing shortcuts should be “ready to go” in terms of voice activation.

Beyond shortcuts, Cassinelli is effusive in his feelings about voice-controlled technology as a whole. He feels Voice Control represents a “secret little leap” in voice tech because of the way it liberates users by allowing them near-unfettered control of their computer(s) with just their voice. The autonomy Voice Control affords is exciting, because autonomy is independence. “Now anyone with an Apple device truly can just look at it, say something, and it’ll do it for you,” he said.

Cassinelli also touched on Voice Control alleviating points of friction for everyone, as well as its broader appeal. He notes Voice Control removes much of the “repetitiveness” of using “Hey Siri” as often because it can do so much on its own, and Apple’s facial awareness APIs guard against unintended, spurious input.9

“I suspect a select few will take this to another level beyond its accessibility use and seek out a Voice Control-first experience, whether it be for productivity purposes or preventive ergonomic reasons,” he said. “I can see use cases where I’m using an iPad in production work and could utilize the screen truly hands-free.”

Rene Ritchie, known for his work at iMore and his burgeoning YouTube channel, Vector, told me he was “blown away” by Voice Control at WWDC. Looking at the big picture, he sees Apple trying to provide a diverse set of interfaces; touch-first may be king, but the advent of Voice Control is further proof that it isn’t the one true input method. Ritchie views Apple as wanting to “make sure all their customers can use all their devices using all the different input methods available.”

“We’ve seen similar features before from other platforms and vendors. But having [Voice Control] available on all of Apple’s devices, all at the same time, in such a thoughtful way, was really impressive,” he said.

Like Cassinelli, Ritchie envisions Voice Control being useful in his own work for the sheer coolness and convenience of it. “I do see myself using Voice Control. Aside from the razzle-dazzle Blade Runner sci-fi feels, and the accessibility gains, I think there are a lot of opportunities where voice makes a lot of sense,” he said.

Voice Control’s Bright Future

Of the six WWDCs I’ve covered since going for the first time in 2014, the 2019 edition sure felt like the biggest yet. Much of the anticipation had to do with the extreme iPad makeover and the pent-up demand for the new Mac Pro. The rumor mill predicted these things; the Apple commentariat knew they were coming.

Voice Control, on the other hand, was truly a surprise. Certainly, it was the one announcement that tugged at the heartstrings the hardest; some of the biggest applause at the keynote came right after Apple’s Voice Control intro video ended. As transformative as something like iPadOS will be for iPad power users, you didn’t hear about Craig Federighi cutting onions10 over thumb drive support in Files.

It was important enough to merit time in the limelight on stage – which, for the disabled community and accessibility at large, is always a huge statement. The importance cannot be overstated; to see the accessibility icon on that giant screen boosts so much awareness of our marginalized and underrepresented group. It means accessibility matters. To wit, it means disabled people use technology too, in innovative and life-altering ways – like with using Voice Control on your Mac or iPhone.

From an accessibility standpoint, Voice Control was clearly the star of the show. When it comes to accessibility, Apple’s marketing approach is consistent, messaging-wise, with “bigger” fish like milestone versions of iOS and hardware like the iPhone. They love every new innovation and are genuinely excited to get them into customers’ hands. But I’ve never seen anything like the emotion that came from discussing and demoing Voice Control this year. It still was Apple marketing, but the vibe felt very different.

It’s early days for Voice Control and it has room to grow, but it’s definitely off to a highly promising start. Going forward I’ll be interested to see what Apple does with the feedback from people like Ian Mackay and Todd Stabelfeldt, who really push this tech to its limits every single day. In the meantime, I believe it’s not hyperbolic to say Voice Control as it stands today will be a game-changer for lots of people.


  1. The 3GS was also significant for bringing discrete accessibility features to iOS (née iPhone OS) for the first time. There were four: VoiceOver, Zoom, White-on-Black, and Mono Audio. ↩︎
  2. Apple likes recycling product names. See also: the recently-departed 12-inch MacBook. ↩︎
  3. Voice Control still exists! Users can determine the function of the side button when held down; one of the options is Voice Control. It can be set by going to Settings ⇾ Accessibility ⇾ Side Button. ↩︎
  4. On the Mac, it’s found via System Preferences ⇾ Accessibility ⇾ Display. On iOS, it’s Settings ⇾ Accessibility ⇾ Voice Control. ↩︎
  5. Speaking of contrast, SF Symbols are a triumph. ↩︎
  6. Apple has sold accessibility-specific accessories, online and in their retail stores, since 2016. ↩︎
  7. In Apple’s case, Siri does offer an alternative UI in the form of Type to Siri↩︎
  8. I bought one to pair with the Echo Wall Clock. It’s a nice visual way to track timers when I cook, and of course, the clock itself is useful for telling time. ↩︎
  9. On iOS devices with Face ID, turning your head away disables the microphones. If someone walks into the room, your conversation won’t be misinterpreted as input. ↩︎
  10. In the second sense of the word. ↩︎

05 Sep 14:51

Shortest Longest Red

by Joshua Kerievsky

Ward Cunningham stopped the exercise. He'd been teaching Test-Driven Development (TDD) to a bunch of students, but they were spending far too much time trying to make their failing tests pass. Writing large, overly complicated unit tests isn't how we test-drive code. Ward asked the students to restart the exercise, but this time he said there would be a competition: "The pair of students who finish the exercise AND have the shortest longest period in the red wins."

Shortest longest red? What's that? In TDD, when a test fails, we say it's red. Your next step is to make the test go green by writing the code to pass the test. It should not take too long to go from red to green, since you are only test-driving micro-pieces of behavior. Refactoring happens after your test is green. We implement this red, green, refactor cycle over and over again as we test-drive solutions. We can measure the time during which we had the longest period of red (failing tests). If we can compare our longest red to that of every other pair doing the exercise, we can determine who had the shortest longest red.
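
To make the scoring concrete, here is a minimal Python sketch of how "shortest longest red" could be computed from the red periods each timekeeper records. The pair names and durations are invented for illustration; they are not from the original exercise.

    # Hypothetical red-phase durations, in seconds, recorded by each pair's timekeeper.
    red_durations = {
        "pair_a": [40, 55, 30],
        "pair_b": [25, 35, 45, 20],
        "pair_c": [90, 15],
    }

    # Score each pair by its single longest time in the red,
    # then pick the pair whose longest red is the shortest.
    longest_red = {pair: max(times) for pair, times in red_durations.items()}
    winner = min(longest_red, key=longest_red.get)
    print(winner, longest_red[winner])  # -> pair_b 45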

To measure how long they were "in the red", Ward suggested that one programmer in the pair take on the additional responsibility of being the time keeper. They'd keep time of any period during which one or more tests were red. By the end of the exercise, every pair had finished programming the solution and had data about their longest period in the red. Sure enough, one pair had the shortest longest red. But they said they felt guilty for winning. "We think we cheated because we would start the timer once we had a failing test but if it went past a minute and still wasn't green, we would simply revert the code and start again." Ward assured them that such an approach was not cheating. They were learning to test-drive code in small steps.

A TDD graph from Industrial Logic's eLearning; the red vertical bars depict the duration of test failures.

TCR (Test && Commit || Revert)

If you are a TDD practitioner and haven't heard about Kent Beck's new workflow called TCR, you ought to take a look. TCR is different from TDD. It's more radical for starters. The "R" in TCR means Revert. If you write code that doesn't pass your tests it is automatically deleted. Goodbye code, let's try that again! It's a little like those winning students in Ward's TDD class who reverted when it took too long to get to green. But with TCR, a failing test is never the goal. Instead, you endeavor to make small, incremental steps that keep your tests running green. Kent writes:

If you don’t want a bunch of code wiped out then don’t write a bunch of code between greens. Yes it can be frustrating to see code disappear but you almost always find a better, surer, more incremental way of doing the same thing.

Taking a step in which you "write a bunch of code" reveals you have more to learn about what Kent calls the better, surer, more incremental way. Ward's students needed to learn that lesson, and his shortest longest red competition helped. TCR goes further. It adds incremental guard rails to help you program in that better, surer, more incremental way. And it insists that the time of your shortest longest red is zero.
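
The whole workflow fits in a small script. Below is a minimal sketch in Python, assuming pytest as the test runner and a git working tree; Kent Beck's own TCR tooling differs, and the commands here are illustrative only.

    import subprocess

    def tcr() -> None:
        """test && commit || revert: run the suite, commit on green, wipe the change on red."""
        green = subprocess.run(["pytest", "-q"]).returncode == 0
        if green:
            subprocess.run(["git", "commit", "-am", "tcr: green"], check=True)
        else:
            # Revert: throw away every uncommitted edit and try a smaller step.
            subprocess.run(["git", "reset", "--hard"], check=True)

    if __name__ == "__main__":
        tcr()

Run it after every tiny change; if the tests are not green, the change is gone and you try again with a smaller step.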

The post Shortest Longest Red appeared first on Industrial Logic.

05 Sep 14:51

2019 School Year: Use Management Tools to Handle College or University

by Guest Author

Is the back-to-school stress getting to you? All-night cram sessions and excessive caffeine consumption are less than distant memories from the April exam season past. You know you want to make the most of this school year, but you already have to choose between school work, sleep, part-time jobs, and a social life. What if you want to fit in one more thing? What if that thing is a business you want to run on campus? You’re full of ideas, just not full of time!

When your energy is running low and you’re wondering how you’ll keep your head above water, thinking, “How am I ever going to have the time to manage it all?”, BoldGrid swoops in with tips to save the day!

Have you noticed there are a few students who always seem to have everything under control? Not only are they getting straight A’s, but they never appear stressed or over-worked. You may be wondering what they’re doing differently. The secret is no secret at all. It just takes planning and establishing standard operating procedures around how you use technology and spend your time. The same way you run a business is how you can manage your course-load, your social life, and so much more. Here are a few old-fashioned tricks you can still use today to help you reach your educational and business goals this year.

Save Some Mental Energy With a Research Wiki

As a student you’re getting bombarded with research and writing assignments all day. How can you keep all of these various snippets and pages and chapters of text organized? Not only do you have to read all of this stuff, you need to create your own insights about them to impress your professors.

Wikis are old school. But one of the most popular websites in the world, Wikipedia, proves that wikis are still viable for their ability to organize and categorize information. Did you know you can create your own personal wiki for managing projects, research, or creative ideas?

You can actually use the same exact system that Wikipedia uses. It’s called MediaWiki. It’s free and open-source and can be installed on virtually any web hosting platform. Furthermore, if you want your wiki to be private, you can use a program like MAMP to run your wiki on your computer. 

How does having a wiki help you focus? Placing all of the information about your project or research paper into your wiki gets it off your mind and into a system you can refer to later. This means you save some mental energy and avoid the mental fog that can result from over-exertion. Don’t forget that your brain is like a muscle. It will get stronger when you use it, but can also tire out. 

Get Your Files Under Control

You’ll never succeed at running a campus business, handling schoolwork, keeping a social life, and maybe sleeping in once in a while if you have files scattered all over your workspace. Get yourself an organization scheme that works for you and stick to it.

If you have too many files and folders scattered about, you’ll have trouble concentrating on the work you need to do and might fall behind on bills and tasks. Simply having an inbox to capture important documents to be filed away later can save you from the mental shock of a messy office.

Have a Different Email Account For Different Senders

You can actually have a different email account for different purposes. Is that crazy? Take a moment to consider the benefits. Imagine if every time you had to sign up for a service or get a free PDF you could use a special email account just for those unimportant notifications. Imagine further that you had a different inbox for friends and family. Likely, if you’re in school, you already have a school email account. Keep that for school-related tasks only. Set up another email account for personal items and then another for business-related items.

In some cases, these separate inboxes are referred to as sender filters, but they’re much more than that. Having different mailboxes for different administrative areas of your life can help you avoid digital clutter so you can focus on more important tasks. 

Yes, you can set up email filters in your email app. But it takes time to set up filter parameters and then you have to test them. And sometimes they still don’t work right! Having different email addresses for different purposes will save you a lot of time and energy.

Get a Task Manager

You may think task managers are reserved for development teams at high tech companies, but you couldn’t be more wrong. The same procedures used at the big companies are easily applicable to you and your needs.

You can use sprint planning to divide up your hefty course work and manage large projects; you can use tagging features to tag individuals in group projects; you can use calendars to handle your social and work schedules. Every task manager has different purposes, but Notion is a great example of an all-inclusive task manager with a ton of templates that lets you make multiple workspaces. Plus, it’s free!

Set up Slack Channels

Setting up a Slack channel for your friends, your group projects, or to join in with your colleagues for your part-time job is a great way to communicate effectively with groups of people. You can run the app on your phone or computer, so it’s always accessible. Any ideas on the go for your group project? Ping your group! Saw something hilarious in the campus library? Message your friends! Need to switch a shift? Post to your team! Slack lets you create multiple workspaces and easily flip between them. Once you’re done with a group project, delete the workspace to keep your workspaces clean.

Own Your Own Piece of The Internet

Having your own domain name and website is like carving out a small piece of the Internet just for you and your business. You might be thinking: well, I have a lot of followers on Instagram, can’t I just market to them? Sure you can! But pairing your successful Instagram with a website or direct channel to you will help elevate your personal brand. When you’re applying to summer jobs or post-graduation jobs, having a website with your portfolio, blogs, podcast, whatever it is you do on the side, is a great way to stand out in applications.

You also have the opportunity to build yourself an email list and grow your network and following. The true value of an email list is the ownership factor. Your email list is yours forever. As social platforms rise and fall, you may lose connection with your followers. But with email marketing, you get to market to your subscribers in the way you deem appropriate. This means that when you’re looking for a job post-grad, you can send out an email campaign to your followers!

Setting up a website is a simple enough process:

  • Get a domain name
  • Build a site
  • Embed an email marketing sign up form (like through MailChimp)
  • Start creating valuable content
  • Direct your social media followers to your website
  • Promote your hard work and build your personal brand!

When you have your own domain and website, it’s like owning a small country where you make the rules. Considering how often the Internet landscape shifts and changes direction, having a space where you make the rules is a very good thing.

There are so many things you can do to help manage your school-work-life-entrepreneur balance. It’s not easy and it’s okay to try new things and not know exactly which direction you’re headed. All of the above will help you manage whatever it is you choose to do. The rest is up to you!

Chris Maiorana – CONTENT MARKETING COORDINATOR, BOLDGRID

Chris is a content marketing coordinator at BoldGrid and occasional movie critic.

05 Sep 14:51

Why the Total Dossier on Everybody Must Stop

by Todd Weaver

Where people go, what people do, and who talks to whom, should be kept private.

There is a total dossier on everybody, and you are likely a willing, yet oppressed, participant. Willing because of how convenient it is; oppressed, because everything you do is under the complete control of others.

Gang-stalking by corporations must stop. We have seen before what can happen when the whereabouts of all people are tracked. The East German secret police (the Stasi) had over 250,000 spies serving a four-decade despotic regime over a population of 17 million, committing crimes against their own people that were viewed as being as brutal as those perpetrated by their Nazi predecessors; that is what oppression is. We have seen what happens when your privacy is invaded, when what you do is tracked. Decades before the Stasi, the Gestapo had 40,000 spies watching over a country of more than 80 million, committing some of the worst atrocities ever inflicted on civilians; this is what oppression is.

We have seen what happens when who talks to whom turns into a demagogic tragedy. The term McCarthyism was coined for the reckless slandering of public figures that ruined the lives of hundreds of US citizens with unsubstantiated accusations; this is what repression is.

The amount of data gathered on people from any of the aforementioned organizations is infinitesimally small, when compared to the astronomically large, nearly incomprehensible amount of personal data gathered from your mobile phone in just one day.

Where you go is known with satellite-measured accuracy, to within a meter of your position on earth. Polled every millisecond–even when offline, for later synchronizing–your exact location is recorded at every moment of every day, permanently. What floor you’re on, who you are near, how long you’re near them, what speed you’re traveling at, and who you’re traveling with are all elementary-level mathematics to establish. Cross-linking a single data point like your longitude and latitude to a second data point like the radio distance to a cellular tower or three, adding in what Wi-Fi you connect to and the strength of connection, makes confirming your location in triplicate extremely easy.

What you do is matched against where you go, how long you are there, and how much you interact with your phone or health monitoring app. Knowing you’re at an event, bar, game, restaurant, hotel or friend’s house, matched against photos, videos, social media posts, chats, heart rate–or simply how often you look at your phone–can determine what you’re doing with a remarkable degree of accuracy. Were you bored or engaged? Were you hungry, or did the salad you paid for suffice until the after-dinner pizza you had delivered late-night, after your ride-share (aka taxi+tracking) service dropped you off at 11:04pm?

Who talks with whom is egregiously recorded forever, and in nearly all cases what is said to whom is also flagrantly squirreled away for eternity. You chatting with your mother–yep, spied on. You texting your spouse–spied on. You calling to cancel cable–spied on. Your photo sent to your colleague–spied on. It’s easier to list all the things kept between just you and the intended recipient, because it is absolutely nothing. There is no app that can guarantee it’s just two people involved in a text string, because apps, the underlying operating systems, and the underlying cellular networks are controlled by the very same groups that surveil all of society.

Your oppression is not entirely your fault; knowledge is purposefully and behaviorally restricted from your purview.

It’s either buried in the hundredth paragraph of a terms-of-service you didn’t read, or shrouded in enough mystery that you follow the rest of the anchovies in a collective experiment, wondering “if it is this bad, why hasn’t anybody stopped it?”

It takes any one of three things to solve this–as history has shown: governments regulating to benefit civilians; business models changing to respect society; people switching to products and services that are ethical for society. Surveillance companies are working daily to keep that last one from happening: people switching requires a network effect, and they put up anti-competitive barriers so that new competition never gets a level playing field. These same companies–all Big Tech companies–are so gargantuan that they don’t have to change their business practices toward helping society; they opt to use marketing slogans to keep their oppressive regimes dominating instead.

This leaves governments to step in and consider regulating the behemoths–never forgetting that lobbying efforts will work hard to add regulation that keeps the companies gigantic, rather than regulation that benefits civilians, since that type of regulation forces smaller but growing competitors to jump higher and higher to vault over each new regulatory hurdle.

Ridding yourself of the unethical dossier collected on you takes a (convenient) alternative that avoids knowing where people go, what people do, and who talks to whom; it also takes governments willing to stand up and regulate to benefit their civilians.

And most importantly, it takes you leading by example, using products designed to respect your rights.

The post Why the Total Dossier on Everybody Must Stop appeared first on Purism.

05 Sep 14:50

Confederation Trail Mud-Based Cycling Census

by peter@rukavina.net (Peter Rukavina)

Oliver and I concocted a grand plan for today that involved cycling up the Confederation Trail first thing this morning to visit Brett’s coffee stand, then continuing on to rendezvous with his support worker so that he could participate in the New Student Orientation activities on the UPEI campus.

The plan was almost thrown into disarray when the forecast called for rain; we were going to have to recoil to plan B and take the bus.

But then the sun came out, and we were able to cycle after all. And it was a grand morning for cycling, with the very most faint mist in the air and the Island as green as green can possibly be.

We did, indeed, enjoy a coffee, and a hot chocolate, at Caledonia House, and walked up to campus. And Oliver went off on his own.

I walked back to the Farmers’ Market to pick up my bike, and headed back down the trail to work. By this time the forecast rain had manifested in a more serious fashion, and the gentle mist became a somewhat-annoying shower. But I persevered.

As I was cycling downtown I noticed that one gift brought on by the rain was that the tracks of every bicycle riding by were in evidence, providing a kind of mud-based cycle census:

Photo of the Confederation Trail showing bicycle tracks in the mud

There is better census data for the trail than what gets left in the mud: the 2018 report Charlottetown’s Active Transportation Network: Downtown Connectivity & Bike/Ped Volume Information.

The study collected data at several key points along the trail for 32 hours in September 2017 and reported:

  • Charlottetown Mall/Towers Mall
    • ~600 Confederation Trail users north of Towers Road and ~700 to the south
    • Roughly 150 trail users to/from the mall
    • Over 300 pedestrians and cyclists used Towers Road (which doesn’t have lanes or sidewalks)
    • Almost 250 people walked or biked between the Mall and Towers Road
    • Similar numbers of pedestrians both days (except for late morning spike on Friday)
    • Notably more cyclists on Saturday than Friday
  • UPEI
    • ~1600 Confederation Trail users at this location during the 2-day count
    • Almost 300 people to/from UPEI
    • 450 people used the trail to Mt. Edward Road
    • Slightly more pedestrians on Friday than on Saturday (ignoring the very high pedestrian numbers around 2PM on Friday)
    • Slightly more cyclists on Saturday than on Friday
    • At least a couple of pedestrians during almost every 15-minute interval, whereas bikes were reported during several intervals
  • Belvedere Avenue
    • ~770 Confederation Trail users north of Belvedere Ave and ~880 to the south
    • ~125 people accessed the Farmers Market from the Trail or Belvedere (would have been closed on Friday)
    • 75-80 people walking or biking in each direction along Belvedere Avenue
    • Similar numbers of pedestrians both days; 10 or more pedestrians during many of the 15-minute intervals
    • Higher cyclist volumes on Friday than on Saturday
      • At least 1 cyclist per interval between 6:30 AM and 7 PM
      • On Saturday, very few cyclists before 9:30 AM and after 6PM
  • Allen Street
    • Almost 1200 Confederation Trail users north of Allen Street and ~1100 to the south
    • Significantly more people accessed the trail from Allen St. west vs. Allen east
    • About 400 pedestrians and cyclists using Allen St. sidewalks and bike lanes
    • Similar pedestrian numbers both days (ignoring the spike in pedestrians around 2PM on Friday)
    • Steady use of the Trail by cyclists both days
      • At least 2 cyclists during most 15-minute intervals from 10 AM to 6PM both days
      • Many intervals with 3-12 cyclists
  • Longworth Avenue
    • 1000-1100 Confederation Trail users to the east and west of Longworth Ave
    • Significantly higher pedestrian use of sidewalk on the west side of Longworth Ave than the one on the east side (600 vs. 150 south of the trail)
    • Steady and consistent pedestrian activity at this location
      • 10+ pedestrians during many 15-minute intervals, including a spike around 2PM Friday
      • Cyclist volumes of at least 1-2 during most intervals, topping out at 11 around 1PM Friday
05 Sep 14:50

Selling Books, One at a Time

by peter@rukavina.net (Peter Rukavina)

I am a subscriber to the excellent Notes from a Small Press email newsletter, by Anne Trubek, founder and director of Belt Publishing. I’m not sure how I found my way there; I suspect Robin Sloan may have been involved.

In this week’s edition of the newsletter, Anne writes about the backlist; in part:

The backlist—named after the place in a publisher catalog where the titles are listed—are books that were published at least six months earlier. Most sales for a book occur while they are on the frontlist (you probably can guess where that term comes from), specifically during the first 90 days after publication date, though many of those sales occur in the month or two before publication, in those initial orders placed in anticipation. When a title is on the frontlist, it’s costing publishers money and time: we are paying off printing bills (usually the highest single expense), publicity costs, marketing, designers, proofreaders—all those items that are figured into the P&L, or profit-and-loss statement. And we are thinking about how to do all those things better, and biting our nails. But after six months or so, that time and those costs subside, and the book moves to the backlist. All the costs have been paid off, and it consumes less of our mental energy. Which means each time we sell a copy of a backlist title, a much higher percentage of the revenue goes straight to our bottom line. One backlist sale equals about three frontlist ones in my mind.

Farther down she referenced the Belt book How to Live in Detroit Without Being a Jackass.

Excellent title.

I was curious, sought more information, and found:

Are you moving to Detroit because your rent is too high? Did you read somewhere that all you needed to buy a house was the change in your couch cushions? Are you terrified to live in a majority-black city? Welcome to Detroit! And welcome to the guidebook that you coastal transplants, wary suburbanites, unwitting gentrifiers, idealistic starter-uppers and curious onlookers desperately need. Now updated for 2018, How to Live In Detroit Without Being A Jackass offers advice on everything from how to buy and rehab a house to how not to sound like an uninformed racist. Let us help you avoid falling into the “jackass” trap and become the productive, healthy Detroiter you’ve always wanted to be.

That’s a book up my alley. So I bopped over to the website of The Bookmark, my local independent bookseller, and tried to order a copy. I was dismayed to find that the book was listed there as:

special order — may be slow to obtain — suppliers are waiting for stock

Concerned that something was amiss with the backlist pipes, I emailed Anne, and she quickly responded, advising me to check that the ISBN I’d ordered was 978-1948742313, which is for the more recent second edition.

It was not.

I’d ordered the first edition, which is, indeed, “slow to obtain.” But the second edition, The Bookmark’s website tells me, can be here in 13 days or less. So I switched out my order and I eagerly await my copy.

In the meantime, I followed another link from Anne’s newsletter to the newsletter Quoth the Raven, penned by Danny Caine, the proprietor of The Raven Book Store in Lawrence, Kansas. My first issue arrived in my inbox today; it is a collection of snippets from the book-selling floor, including:

Today we had a launch party for Sarah Henning’s Sea Witch. We ordered 75 books, which felt decadent, reckless. Selling even 50 of those would smash in-store event sales records. Last time we checked, the Facebook event had 40 people “going” and 120 “interested.” 100 people showed up, the store was stuffed and sweaty. The crowd spilled onto the sidewalk. We sold out of books with ten minutes to go before the event even started. The first two customers bought five and six books each, respectively, and I knew we were screwed in the best way.

I have long maintained that what’s next in retail is not the challenge of providing more choice, but rather the challenge of providing less choice.

The algorithms were supposed to solve this problem for us, but they’ve proved not up to the task, for they’ve no way of jumping over conceptual fences to recommend the unexpected.

Anne and Danny understand this, I think. I’m happy I found my way to them, and I look forward to having them narrow my choices.

05 Sep 14:50

The Enigma Machine

Jeremy Ashkenas, Observable, Sept 03, 2019

This is a lovely Notebook example of the Enigma Machine used to encode messages during the Second World War. It's written in Observable, described as the Javascript version of Jupyter Notebooks. What I really like is how it shows how a rotating physical device could be used for encryption. A lot of modern cryptography uses a similar approach, using modulo functions instead of a steel tube (think of it as doing mathematics with a clock). Here's the theory, in cartoon form. In cryptography, each rotor could be thought of as a (base 26 modulo) function, with a salt defining how far each rotor is turned after each input digit.
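
To make the rotor-as-modulo idea concrete, here's a tiny Python sketch of my own (not Ashkenas's notebook, and nothing like the full Enigma, which adds permuted rotor wirings, a reflector and a plugboard): a single 'rotor' shifts each letter by an offset modulo 26 and turns one notch after every character, so the same plaintext letter encodes differently each time it appears.

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode(message, rotor_offset=3):
    out = []
    for ch in message.upper():
        if ch not in ALPHABET:
            out.append(ch)  # pass spaces and punctuation through unchanged
            continue
        # The rotor maps each letter to the one rotor_offset places along,
        # modulo 26 (the "mathematics with a clock" mentioned above).
        out.append(ALPHABET[(ALPHABET.index(ch) + rotor_offset) % 26])
        # After each letter the rotor turns one notch.
        rotor_offset = (rotor_offset + 1) % 26
    return "".join(out)

def decode(ciphertext, rotor_offset=3):
    out = []
    for ch in ciphertext.upper():
        if ch not in ALPHABET:
            out.append(ch)
            continue
        out.append(ALPHABET[(ALPHABET.index(ch) - rotor_offset) % 26])
        rotor_offset = (rotor_offset + 1) % 26
    return "".join(out)

print(encode("ATTACK AT DAWN"))          # DXYGJS JD OMJB
print(decode(encode("ATTACK AT DAWN")))  # ATTACK AT DAWN

The real machine replaces the simple shift with a scrambled permutation per rotor and chains several rotors together, but the step-and-take-the-remainder idea is the same.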

Web: [Direct Link] [This Post]
05 Sep 14:49

What Is a Tech Company?

by Ben Thompson

At first glance, WeWork and Peloton, which both released their S-1s in recent weeks, don’t have much in common: one company rents empty buildings and converts them into office space, and the other sells home fitness equipment and streaming classes. Both, though, have prompted the same question: is this a tech company?

Of course, it is fair to ask, “What isn’t a tech company?” Surely that is the endpoint of software eating the world; I think, though, to classify a company as a tech company because it utilizes software is just as unhelpful today as it would have been decades ago.

IBM and Tech-Centered Ecosystems

Fifty years ago, what is a tech company was an easy question to answer: IBM was the tech company, and everybody else was IBM’s customers. That may be a slight exaggeration, but not by much: IBM built the hardware (at that time the System/360), wrote the software, including the operating system and applications, and provided services, including training, ongoing maintenance, and custom line-of-business software.

All kinds of industries benefited from IBM’s technology, including financial services, large manufacturers, retailers, etc., and, of course, the military. Functions like accounting, resource management, and record-keeping automated and centralized activities that used to be done by hand, dramatically increasing the efficiency of existing activities and making new kinds of activities possible.

Increased efficiency and new business opportunities, though, didn’t make J.P. Morgan or General Electric or Sears tech companies. Technology simply became one piece of a greater whole. Yes, it was essential, but that essentialness exposed technology’s banality: companies were only differentiated to the extent they did not use computers, and then to the downside.

IBM, though, was different: every part of the company was about technology — indeed, IBM was an entire ecosystem unto itself: hardware, software, and services, all tied together with a subscription payment model strikingly similar to today’s dominant software-as-a-service approach. In short, being a tech company meant being IBM, which meant creating and participating in an ecosystem built around technology.

Venture Capital and Zero Marginal Costs

The story of IBM handing Microsoft the contract for the PC operating system and, by extension, the dominant position in computing for the next fifteen years, is a well-known one. The context for that decision, though, is best seen by the very different business model Microsoft pursued for its software.

What made subscriptions work for IBM was that the mainframe maker was offering the entire technological stack, and thus had reason to be in direct ongoing contact with its customers. In 1968, though, in an effort to escape an antitrust lawsuit from the federal government, IBM unbundled their hardware, software, and services. This created a new market for software, which was sold on a somewhat ad hoc basis; at the time software didn’t even have copyright protection.

Then, in 1980, Congress added “computer program” to the definition list of U.S. copyright law, and software licensing was born: now companies could maintain legal ownership of software and grant an effectively infinite number of licenses to individuals or corporations to use that software. Thus it was that Microsoft could charge for every copy of Windows or Visual Basic without needing to sell or service the underlying hardware it ran on.

This highlighted another critical factor that makes tech companies unique: the zero marginal cost nature of software. To be sure, this wasn’t a new concept: Silicon Valley received its name because silicon-based chips have similar characteristics; there are massive up-front costs to develop and build a working chip, but once built additional chips can be manufactured for basically nothing. It was this economic reality that gave rise to venture capital, which is about providing money ahead of a viable product for the chance at effectively infinite returns should the product and associated company be successful.

Indeed, this is why software companies have traditionally been so concentrated in Silicon Valley, and not, say, upstate New York, where IBM was located. William Shockley, one of the inventors of the transistor at Bell Labs, was originally from Palo Alto and wanted to take care of his ailing mother even as he was starting his own semiconductor company; eight of his researchers, known as the “traitorous eight”, would flee his tyrannical management to form Fairchild Semiconductor, the employees of which would go on to start over 65 new companies, including Intel.

It was Intel that set the model for venture capital in Silicon Valley, as Arthur Rock put in $10,000 of his own money and convinced his contacts to add an additional $2.5 million to get Intel off the ground; the company would IPO three years later for $8.225 million. Today the timelines are certainly longer but the idea is the same: raise money to start a company predicated on zero marginal costs, and, if you are successful, exit with an excellent return for shareholders. In other words, it is the venture capitalists that ensured software followed silicon, not the inherent nature of silicon itself.

To summarize: venture capitalists fund tech companies, which are characterized by a zero marginal cost component that allows for uncapped returns on investment.

Microsoft and Subscription Pricing

Probably the most overlooked and underrated era of tech history was the on-premises era dominated by software companies like Microsoft, Oracle, and SAP, and hardware from not only IBM but also Sun, HP, and later Dell. This era was characterized by a mix of up-front revenue for the original installation of hardware or software, plus ongoing services revenue. This model is hardly unique to software: lots of large machinery is sold on a similar basis.

The zero marginal cost nature of software, however, made it possible to cut out the up-front cost completely; Microsoft started pushing this model heavily to large enterprise in 2001 with version 6 of its Enterprise Agreement. Instead of paying for perpetual licenses for software that inevitably needed to be upgraded in a few years, enterprises could pay a monthly fee; this had the advantage of not only operationalizing former capital costs but also increasing flexibility. No longer would enterprises have to negotiate expensive “true-up” agreements if they grew; they were also protected on the downside if their workforce shrunk.

Microsoft, meanwhile, was able to convert its up-front software investment from a one-time payment to regular payments over time that were not only perpetual in nature (because to stop payment was to stop using the software, which wasn’t a viable option for most of Microsoft’s customers) but also more closely matched Microsoft’s own development schedule.

This wasn’t a new idea, as IBM had shown several decades earlier; moreover, it is worth pointing out that the entire function of depreciation when it comes to accounting is to properly attribute capital expenditures across the time periods those expenditures are leveraged. What made Microsoft’s approach unique, though, is that over time the product enterprises were paying for was improving. This is in direct contrast to a physical asset that deteriorates, or a traditional software support contract that is limited to a specific version.

Today this is the expectation for software generally: whatever you pay for today will be better in the future, not worse, and tech companies are increasingly organized around this idea of both constant improvements and constant revenue streams.

Salesforce and Cloud Computing

Still, Microsoft products had to actually be installed in the first place: much of the benefit of Enterprise Agreements accrued to companies that had already gone through that pain.

Salesforce, founded in 1999, sought to extend that same convenience to all companies: instead of having to go through long and painful installation processes that were inevitably buggy and over-budget, customers could simply access Salesforce on Salesforce’s own servers. The company branded it “No Software”, because software installations had such negative connotations, but in fact this was the ultimate expression of software. Now, instead of one copy of software replicated endlessly and distributed anywhere, Salesforce would simply run one piece of software and give anyone anywhere access to it. This did increase fixed costs — running servers and paying for bandwidth is expensive — but the increase was more than made up for by the decrease in upfront costs for customers.

This also increased the importance of scale for tech companies: now not only did the cost of software development need to be spread out over the greatest number of customers, so did the ongoing costs of building and running large centralized servers (of course Amazon operationalized these costs as well with AWS). That, though, became another characteristic of tech companies: scale not only pays the bills, it actually improves the service as large expenditures are leveraged across that many more customers.

Atlassian and Zero Transaction Costs

Still, Salesforce was still selling to large corporations. What has changed over the last ten years in particular is the rise of freemium and self-serve, but the origins of this model go back a decade earlier.

The early 2000s were a dire time in tech: the bubble had burst, and it was nearly impossible to raise money in Silicon Valley, much less anywhere else in the world — including Sydney, Australia. So, in 2001, when Scott Farquhar and Mike Cannon-Brookes, whose only goals were to make $35,000 a year and not have to wear a suit, couldn’t afford a sales force for the collaboration software they had developed, called Jira, they simply put it on the web for anyone to trial, with a payment form to unlock the full program.

This wasn’t necessarily new: “shareware” and “trialware” had existed since the 1980s, and were particularly popular for games, but Atlassian, thanks to being in the right place (selling Agile project management software) at the right time (the explosion of Agile as a development methodology) was using essentially the same model to sell into enterprise.

What made this possible was the combination of zero marginal costs (which meant that distributing software didn’t cost anything) and zero transaction costs: thanks to the web and rudimentary payment processors it was possible for Atlassian to sell to companies without ever talking to them. Indeed, for many years the only sales people Atlassian had were those tasked with reducing churn: all in-bound sales were self-serve.

This model, when combined with Salesforce’s cloud-based model (which Atlassian eventually moved to), is the foundation of today’s SaaS companies: customers can try out software with nothing more than an email address, and pay for it with nothing more than a credit card. This too is a characteristic of tech companies: free-to-try, and easy-to-buy, by anyone, from anywhere.

The Question of the Real World

So what about companies like WeWork and Peloton that interact with the real world? Note the centrality of software in all of these characteristics:

  • Software creates ecosystems.
  • Software has zero marginal costs.
  • Software improves over time.
  • Software offers infinite leverage.
  • Software enables zero transaction costs.

The question of whether companies are tech companies, then, depends on how much of their business is governed by software’s unique characteristics, and how much is limited by real world factors. Consider Netflix, a company that both competes with traditional television and movie companies yet is also considered a tech company:

  • There is no real software-created ecosystem.
  • Netflix shows are delivered at zero marginal costs without the need to pay distributors (although bandwidth bills are significant).
  • Netflix’s product improves over time.
  • Netflix is able to serve the entire world because of software, giving them far more leverage than much of their competition.
  • Netflix can transact with anyone with a self-serve model.

Netflix checks four of the five boxes.

Airbnb, which has yet to go public, is also often thought of as a tech company, even though they deal with lodging:

  • There is a software-created ecosystem of hosts and renters.
  • While Airbnb’s accounting suggests that its revenue has minimal marginal costs, a holistic view of Airbnb’s market shows that the company effectively pays hosts 86 percent of total revenue: the price of an “asset-lite” model is that real world costs dominate in terms of the overall transaction.
  • Airbnb’s platform improves over time.
  • Airbnb is able to serve the entire world, giving it maximum leverage.
  • Airbnb can transact with anyone with a self-serve model.

Uber, meanwhile, has long been mentioned in the same breath as Airbnb, and for good reason: it checks most of the same boxes:

  • There is a software-created ecosystem of drivers and riders.
  • Like Airbnb, Uber reports its revenue as if it has low marginal costs, but a holistic view of rides shows that the company pays drivers around 80 percent of total revenue; this isn’t a world of zero marginal costs.
  • Uber’s platform improves over time.
  • Uber is able to serve the entire world, giving it maximum leverage.
  • Uber can transact with anyone with a self-serve model.

A major question about Uber concerns transaction costs: bringing and keeping drivers on the platform is very expensive. This doesn’t mean that Uber isn’t a tech company, but it does underscore the degree to which its model is dependent on factors that don’t have zero costs attached to them.

Now for the two companies with which I opened the article. First, WeWork (which I wrote about here and here):

  • WeWork claims it has a software-created ecosystem that connects companies and employees across locations, but it is difficult to find evidence that this is a driving factor for WeWork’s business.
  • WeWork pays a huge percentage of its revenue in rent.
  • WeWork’s offering certainly has the potential to improve over time.
  • WeWork is limited by the number of locations it builds out.
  • WeWork requires a consultation for even a one-person rental, and relies heavily on brokers for larger businesses.

Frankly, it is hard to see how WeWork is a tech company in any way.

Finally Peloton (which I wrote about here):

  • Peloton does have social network-type qualities, as well as strong gamification.
  • While Peloton is available as just an app, the full experience requires a four-figure investment in a bike or treadmill; that, needless to say, is not a zero marginal cost offering. The service itself, though, is zero marginal cost.
  • Peloton’s product improves over time.
  • The size, weight, and installation requirements for Peloton’s hardware mean the company is limited to the United States and the just-added United Kingdom and Germany.
  • Peloton has a high-touch installation process.

Peloton is also iffy as far as these five factors go, but then again, so is Apple: software-differentiated hardware is in many respects its own category. And, there is one more definition that is worth highlighting.

Peloton and Disruption

The term “technology” is an old one, far older than Silicon Valley. It means anything that helps us produce things more efficiently, and it is what drives human progress. In that respect, all successful companies, at least in a free market, are tech companies: they do something more efficiently than anyone else, on whatever product vector matters to their customers.

To that end, technology is best understood with qualifiers, and one of the most useful sets comes from Clayton Christensen and The Innovator’s Dilemma:

Most new technologies foster improved product performance. I call these sustaining technologies. Some sustaining technologies can be discontinuous or radical in character, while others are of an incremental nature. What all sustaining technologies have in common is that they improve the performance of established products, along the dimensions of performance that mainstream customers in major markets have historically valued. Most technological advances in a given industry are sustaining in character…

Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value. Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use.

Sustaining technologies make existing firms better, but they don’t change the competitive landscape. By extension, if adopting technology simply strengthens your current business, as opposed to making it uniquely possible, you are not a tech company. That, for example, is why IBM’s customers were no more tech companies than are users of the most modern SaaS applications.

Disruptive technologies, though, make something possible that wasn’t previously, or at a price point that wasn’t viable. This is where Peloton earns the “tech company” label from me: compared to spin classes at a dedicated gym, Peloton is cheap, and it scales far better. Sure, looking at a screen isn’t as good as being in the same room with an instructor and other cyclists, but it is massively more convenient and opens the market to a completely new customer base. Moreover, it scales in a way a gym never could: classes are held once and available forever on-demand; the company has not only digitized space but also time, thanks to technology. This is a tech company.

This definition also applies to Netflix, Airbnb, and Uber; all digitized something essential to their competitors, whether it be time or trust. I’m not sure, though, that it applies to WeWork: to the extent the company is unique it seems to rely primarily on unprecedented access to capital. That may be enough, but it does not mean WeWork is a tech company.

And, on the flipside, being a tech company does not guarantee success: the curse of tech companies is that while they generate massive value, capturing that value is extremely difficult. Here Peloton’s hardware is, like Apple’s, a significant advantage.

On the other hand, asset-lite models, like ride-sharing, are very attractive, but can Uber capture sufficient value to make a profit? What will Airbnb’s numbers look like when it finally IPOs? Indeed, the primary reason Peloton’s numbers look good is because they are selling physical products, differentiated by software, at a massive profit!

Still, definitions are helpful, even if they are not predictive. Software is used by all companies, but it completely transforms tech companies and should reshape consideration of their long-term upside — and downside.

I wrote a follow-up to this article in this Daily Update.

05 Sep 14:49

Art and Improvements at Joyce-Collingwood Station

by Gordon Price

Bob Ransford got it right: the public art piece – ‘Off Centre’ by artist Renee Van Halm – is at the Joyce-Collingwood Station.

It’s a small but colourful piece of the just-completed station upgrade funded in the blandly named TransLink Maintenance and Repair Program – a $200 million program of 70 projects that have been rolling out since 2016.

As these small and large improvements continue, it feels like a golden age of renewal for TransLink, reflected not only in physical changes but also in additional capacity and ease of use.  Like these, as reported in The Sun:

On Tuesday, 24 new Skytrain cars will increase capacity by five per cent on Expo Line and nine per cent on the Millennium Line during peak periods.

As well, commuters can expect more frequency on 12 key bus routes with the addition of 40,000 service hours. On Seabus, sailings are being increased to every 10 minutes during peak periods. …

The regional transportation authority has implemented a new artificial intelligence algorithm that improved the accuracy of bus departure estimates by 74 per cent during a pilot project.

It can even seem excessive:

When headways are every two minutes on a Sunday afternoon, passengers don’t really need a schedule.  But hey, it shows they care.

Let’s remember this as we reflect back on the 2015 referendum – a totally cynical move by the BC Liberals, which delayed the inevitable funding and cost millions, only serving to demonstrate how easy it is to trash government if you make the price visible. The Liberals have barely acknowledged (and never apologized for) imposing that referendum on the region.

The least they could do now is to recognize how TransLink has improved, helped shape the region, and is more necessary than ever.

05 Sep 14:48

A Little Story About Bugs and Myself on the App Store

Bug Bytes: Gus Mueller:

In his 23 years coding for the Mac, Gus Mueller, creator of the popular image editor Acorn, has had ample opportunity to make errors. Here, he talks about his most memorable one, which goes back to VoodooPad, an advanced journaling app Mueller released in 2003.

This was a fun little writeup about what was probably my worst bug ever. Or at least, the one that gave me a giant panic attack when it happened. I can still remember the dread when composing an email to the customer explaining that the data was gone. I felt awful. But then there was the surprise happy ending and I felt like I had dodged a bullet.

The link above will give a little preview to the story, but to read the whole thing, you'll need to follow the link to the Mac App Store.

05 Sep 14:47

Android 10 launches today

by Volker Weber

Pixel users, start your engines and smash that update button. All others, later.

More >

05 Sep 14:46

Gamergate

Revised 9/9/19 to clarify Eric Corbett’s political affinities.

Gamergate was the template for the villainy of modern social media. Here’s a great NPR interview on Gamergate today.

It’s still simmering. At Wikipedia, an unbannable editor, “Eric Corbett,” was banned over the weekend. In a previous version of this note, I described him as “right-wing (or at least anti-Feminist)”; Corbett, formerly a notable and prolific Wikipedian, suggested by email that I describe him as “an equal-opportunity abuser.” Corbett continued:

You may have been duped by those I categorised as ‘militant feminists’ - and I'm sure you know who I'm referring to - but I have always supported equal rights and opportunities regardless of gender. My dispute with your gender gap heroes was quite simply over my distrust of the figures produced by the WMF purporting to claim that only something like 10–15% of editors are female, which the WMF have themselves been forced to admit is a figure they pretty much made up.

Corbett became notorious when, in an argument over the encyclopedia’s gender gap task force, he told one of the leaders of the task force that “the easiest way to avoid being called a cunt is not to act like one.” Wikipedia bent over backwards to avoid a sanction then.

When opponents of women in computer games showed up for Gamergate’s attack on Wikipedia, the arbitration committee banned every feminist they were asked to sanction, and only the feminists. (One throwaway right-wing account was banned, and its author decamped to write a regular column at Breitbart. The editor behind another Gamergate account got drunk one night and got into a fight with a policeman; the resulting jail term disrupted their editing.)

Like clockwork, right-wing Wikipedia is now out to “balance” Eric Corbett’s ban by ridding themselves of PeterTheFourth, one of the few remaining people who are not right-wing extremists and who are willing to edit Gamergate pages. Meanwhile, Wikipedia's coverage drifts further and further to the right, from a constant years-long edit war to smear Margaret Sanger to exercising extraordinary care to avoid covering last week’s game-industry meltdown in which several prominent game-industry figures were accused of rape, assault, and creepiness by their former colleagues.

05 Sep 14:46

Twitter Favorites: [Sean_YYZ] I love the cartoon (as it rightly puts Downie into Canadian iconography), but it really only applies to Anglo-Canad… https://t.co/pVmfaksXmh

Sean Marshall @Sean_YYZ
I love the cartoon (as it rightly puts Downie into Canadian iconography), but it really only applies to Anglo-Canad… twitter.com/i/web/status/1…
05 Sep 14:45

Announcing the PureBoot Bundle: Tamper-evident Firmware from the Factory

by Kyle Rankin

We have been promoting the benefits of our PureBoot tamper-evident firmware with a Librem Key for some time, but until now our laptops have shipped with standard coreboot firmware that didn’t include tamper-evident features. To get those features, you had to reflash your Librem laptop with PureBoot firmware after the fact, using our standard firmware update process. One of the biggest challenges for most people using PureBoot was the initial setup process–but many people might find installing an OS challenging too.

The best way to solve this challenge is for us to do the setup for you–and that’s what we are happy to announce today.

While we will still default to our standard coreboot firmware, starting today, if you order a Librem laptop and select the “PureBoot Bundle” option for the firmware, you can choose to have PureBoot installed and configured at the factory. The PureBoot Bundle includes a Librem Key, as well as a “Vault” USB drive that will contain the GPG public key we generated at the factory. You can use the Vault drive later to store backups of GPG keys you generate and store them in a safe place.

With the PureBoot Bundle, you will be able to detect firmware tampering and rootkits out of the box! Just unbox the laptop, plug in the Librem Key and turn it on–if the Librem Key blinks green, your laptop is safe; if it blinks red, it was tampered with in transit. Also, now that our Librem Keys are made in the USA next to our fulfillment center, we have even tighter control over the supply chain for the most critical trusted component in this equation.

If you pick a PureBoot Bundle, we will perform the following additional steps on top of the standard PureOS install process:

  • Reflash the firmware with PureBoot
  • Factory-reset the Librem Key and set default user and admin PINs
  • Generate a new, unique GPG key on the Librem Key
  • Copy the corresponding GPG public key to a USB flash drive shipped with the laptop
  • Sign all of the files in /boot with this GPG key (see the sketch after this list)
  • Add the GPG public key to the firmware’s GPG keyring and reflash the firmware
  • Reset the TPM and set a default admin PIN
  • Store the known-good firmware measurements in the TPM
  • Share a secret in the TPM and Librem Key to detect later tampering
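
To give a rough sense of what that signing step buys you, here is an illustrative Python sketch of checking detached GPG signatures over files in /boot. To be clear, this is not Purism’s PureBoot code (PureBoot performs its verification inside the firmware itself, before the OS boots); the sketch only assumes the standard gpg command-line tool and a detached .sig file sitting alongside each boot file.

import pathlib
import subprocess

def verify_boot_signatures(boot_dir="/boot"):
    """Return True only if every *.sig in boot_dir verifies against its file."""
    ok = True
    for sig in pathlib.Path(boot_dir).glob("*.sig"):
        target = sig.with_suffix("")  # e.g. vmlinuz.sig -> vmlinuz
        result = subprocess.run(
            ["gpg", "--verify", str(sig), str(target)],
            capture_output=True,
        )
        if result.returncode != 0:
            print(f"Possible tampering: bad or missing signature for {target}")
            ok = False
    return ok

if __name__ == "__main__":
    print("boot files verified" if verify_boot_signatures() else "verification failed")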

When you get your PureBoot Bundle, you can immediately test whether the firmware was tampered with during shipment. For an additional charge, you can contact us about our anti-interdiction services which, among other measures, ships the Librem laptop and Librem Key separately.

We believe you should have full control over your keys

Once you have verified the integrity of the firmware, you can set new passwords and secrets on the Librem Key and TPM, generate new GPG keys (or copy over GPG keys you already have), and re-sign all of the files, all with keys under your control, at any time.

We hope that, by setting it up for you at the factory, we can get this next-generation tamper-detection technology into more customers’ hands. Everyone–not just hardcore geeks–deserves the peace of mind of knowing that their systems are safe from tampering; and unlike with other secure boot systems, PureBoot gives you tamper-evident firmware without vendor lock-in–you control all of the keys.

To get the PureBoot Bundle, order a Librem 13 or Librem 15 and on the configuration page in the shop, select “PureBoot Bundle” under the firmware option.

The post Announcing the PureBoot Bundle: Tamper-evident Firmware from the Factory appeared first on Purism.

05 Sep 14:45

The search for what to do about dog poop, Vancouver edition

by Frances Bula

Engineers at Metro Vancouver tell me that they get asked about this topic by reporters more often than any other issue, including sea-level rise or drugs in sewage.

What to do about the by-products of the tens of thousands of dogs in the region seems to endlessly fascinate people. The breaking news on this? Vancouver is putting out an official request for innovative solutions. My story in the Globe here. Full text also posted below.

Naturally, we’re not the only city concerned about this. It’s been suggested that Toronto condo owners have their pets’ DNA tested to find out who is behind poop problems there, while there’s also a search for solutions in Ontario generally.

In the meantime, still no word on what to do about all the cat poop (it can’t have the same treatment as dog stuff because of the toxoplasmosis in cat feces) and the goose poop (one of my faithful readers was more concerned about that than the dogs, according to an email I got today) in the city.

By the way, you may all thank me now for my refusal to include any terrible dog-related puns in my story, which is something that seems to be a near-fatal affliction among newsies.

The unleashed-dog area next to Vancouver’s Olympic Village is a hub of busyness on a warm Labour Day weekend, owners watching their pets sprint and jump against a backdrop of downtown condo towers.

The red bins next to the park get steady business, as those owners drop off what have become ubiquitous accessories to dog ownership in the city: plastic bags filled with their dogs’ biological waste.

Most of the owners don’t know about the labour-intensive process that follows those plastic bags of dog poop. The City of Vancouver pays contractors to carry out the unlovely job of slicing the bags open, mixing the feces with water and then delivering the liquid result to the region’s sewer-treatment system – after throwing out the bags, which are not biodegradable no matter what some manufacturers claim.

“Ugh,” is the response from dog owner Justin Lee when he hears about that procedure, after depositing a bag on behalf of his British bulldog, Bubba, at the park.

“I didn’t know that. I didn’t realize those bags aren’t compostable.”

But anyone working in the business of collecting solid waste – the region’s cities and the Metro Vancouver regional district – does.

And now they are also hunting for a better way to handle the thousands of tonnes of dog waste that are deposited throughout the Lower Mainland every year, on the ground, in regular garbage bins and all kinds of other places that are problematic.

That’s what prompted the City of Vancouver to issue a call recently (deadline, mid-September) for anyone with suggestions for better methods, from worm composting to regular composting to a new program to persuade dog owners to take their bags home, cut them open themselves and dump the contents into their own toilets. Or any other innovative method.

“We’re definitely looking for what are the other solutions out there,” says Jon McDermott, who oversees the city’s red-bin dog waste pilot project.

At the moment, there are red bins in just eight parks in Vancouver, the result of a test that started three years ago to see if dog owners would use them instead of dumping dog waste into the regular garbage stream.


It turns out they will use them, with the result that the city now collects 25 tonnes a year of poop-filled bags from the red bins – paying contractor Scooby’s Dog Waste Removal Service $40,000 a year to dispose of them.

But the city, prompted by a motion from Councillor Sarah Kirby-Yung, is looking at extending some kind of dog waste program to all of its 300-plus parks, which would mean a huge expansion in poop tonnage from the city’s estimated 32,000 to 55,000 dogs. Bag slicing is too primitive and expensive for that, Mr. McDermott says.

The city’s initiative is being welcomed by others dealing with the same issue.

“I’m really hoping the city will come up with innovative solutions,” says Karen Storry, Metro Vancouver’s senior project engineer in the solid waste division, which ends up dealing with the byproducts of 350,000 dogs directly in regional parks and indirectly in the sewage system. “This may be an opportunity for the right entrepreneur to come up with something.”

Dog waste is a major issue in the region. Feces left on the street get washed into storm drains and eventually end up in streams, rivers and the ocean. It’s also a problem in landfills. It’s a “concentrated producer of greenhouse gases when it breaks down,” the city’s call-for-information document notes.

It can’t go into city green bins because the region’s commercial composters have been adamant that it’s too problematic for them to process. And it’s not the best idea to put animal feces, which contain pathogens, in backyard composters, Ms. Storry says, even though that’s sometimes proposed as a solution.

“Then you’re managing this material that has risks and that maybe gets put on vegetables, or you have people putting dirt in their mouths.”

Metro Vancouver staff have tried a number of experiments already in their parks: doggy sandboxes with a scented pole in the middle to attract the animals to do their business in that area (dogs didn’t like it); in-ground sewer tanks (didn’t work logistically, as it required people to travel to a single tank, when so many regional parks consist of long, winding trails), and worm composting (a lot of work).

They initiated the red-bin idea in their parks, where across the region more than 100 tonnes of dog waste get deposited every year.

Ms. Storry says getting owners to put dog waste in their toilets would be a good idea, except for her fear that too many owners have been tricked into believing those special dog waste plastic bags are compostable. That’s a worry that her fellow Metro Vancouver engineer Linda Parkinson shares.

“My concern would be the rise of those ‘flushable’ bags,” says Ms. Parkinson, who works in the wastewater division, with some frustration. (The situation is of such concern that an environmental group has filed a complaint with Canada’s Competition Bureau about the mislabelling of those kinds of products.)

Ms. Storry says, whatever the solution ultimately is, it has to be easy for owners. “This isn’t their favourite part of dog ownership, so they want a simple solution.”

But Mr. Lee, who works as a longshoreman in Delta, B.C., is more optimistic than that about dog owners and their tolerance for handling dog poop.

“As owners, we spend time picking it up so we’re used to it. I think you would find more dog owners would do it than not. And we could help out by doing it that way.”

05 Sep 14:44

Notion.io is my new favorite - nathanb



xtabber wrote:
> Notion just got a very unfavorable review in PC Magazine:
> https://www.pcmag.com/review/370368/notion

I'm really trying not to be a fanboy here but I got a good chuckle out of the summary.

From the 'cons' list: "Few options for customizing views or organizing content."

That tells me right there that the reviewer just doesn't get it. The fact that he doesn't think Notion is good at precisely its core competency made me stop reading the rest.

If anything, Notion's ability to mix wiki, database, and outliner (with transclusion!!!) makes it hard to use only because you have endless organization possibilities instead of being forced to organize one way... Notion knows this and that is why they have a ton of templates to make it easier to grok the potential. The reviewer didn't seem to like the abundance of those templates either.

I've been enjoying most of the critiques on this site, as there are definitely real cons to Notion, but this review is like saying Excel is bad at math. I'm annoyed that this writer gets paid for this while Paul J Miller produces superior content as a side hobby.

05 Sep 14:43

Going Narrower

by Richard Millington

If a community isn’t reaching critical mass, you might be tempted to expand its scope, drive more people into it, and hope it takes off.

Alas, it never works.

If you can’t engage the members you have, you’re unlikely to engage the members you don’t have.

A better approach is to go narrower. Reduce the scope of the community. Focus on a more niche target audience or topic. Better understand the unique needs of a smaller, more connected group of members. Figure out exactly what they need, what they desire, and what identity they share.

Then start building out a community from there.

If it’s not working, it doesn’t make sense to go broader. Try going narrower instead.

05 Sep 14:43

Huawei without Google

by Volker Weber
Huawei has said that it will hold an event in Munich on Sept. 19 to unveil its new flagship model, the Mate 30. But at the event, Huawei may not be able to say when it will actually start selling the Mate 30 in Europe and other overseas markets, employees familiar with the situation said. Huawei still is trying to figure out how to address the problem of missing Google services, the employees said.

More >

05 Sep 14:38

Twitter Favorites: [Planta] The latest from @nardwuar, talking to Jagmeet Singh is so fascinating to watch. He is so gifted at drawing Singh ou… https://t.co/179VugZlED

Joseph Planta @Planta
The latest from @nardwuar, talking to Jagmeet Singh is so fascinating to watch. He is so gifted at drawing Singh ou… twitter.com/i/web/status/1…
05 Sep 14:38

From my inbox

by Volker Weber

Things are coming thick and fast. The Amazon courier knows me by now. Thank you so much! With all this, one can hardly help but get well quickly.

05 Sep 14:38

Notion.io is my new favorite - Chris Thompson

There are definitely some valid criticisms in that review, but many of the points are ridiculous, e.g., the claim that Evernote has more customizable views than Notion.

I think the criticisms of the editor are largely off-base. Notion's editor is extremely well designed and, in my opinion, probably the best compromise between Markdown-style and arbitrary-formatting-style apps: it's WYSIWYG, but it's also easy to draft documents with actual structure, and easy to add structure later by transforming blocks if you start by just writing down a bunch of quick text-only notes. (I wish the same kind of editor were available as a standalone desktop tool or an Emacs mode, actually.)

It can be hard to figure out how to set it up though, which is the usual Achilles heel of most powerful information managers. I think they're aware of this and clearly trying to straddle a fine line between user-friendliness and power.
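
To make the "transforming blocks" point concrete, here is a purely hypothetical sketch (in Python) of the kind of block-based document model an editor like this could be built on. It is not Notion's actual data model or API, just an illustration of why promoting plain text to structure after the fact is cheap in such a design.

    # Hypothetical block-based document model, for illustration only;
    # nothing here reflects Notion's real internals or API.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Block:
        kind: str                        # e.g. "text", "heading", "todo"
        content: str
        children: List["Block"] = field(default_factory=list)

    def transform(block: Block, new_kind: str) -> Block:
        # "Transforming" a block only changes its kind; the content and any
        # nested children are untouched, so structure can be added later.
        block.kind = new_kind
        return block

    # Draft quickly as plain text, then promote lines to structure afterwards.
    doc = [Block("text", "Project kickoff"), Block("text", "buy a domain")]
    transform(doc[0], "heading")
    transform(doc[1], "todo")
    print([(b.kind, b.content) for b in doc])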

xtabber wrote:
> Notion just got a very unfavorable review in PC Magazine:
> https://www.pcmag.com/review/370368/notion

05 Sep 14:38

Useful and not so useful Statistics

by Nathan Yau

Hannah Fry, for The New Yorker, describes the puzzle of using statistics, which capture general patterns, to make decisions about individuals:

There is so much that, on an individual level, we don’t know: why some people can smoke and avoid lung cancer; why one identical twin will remain healthy while the other develops a disease like A.L.S.; why some otherwise similar children flourish at school while others flounder. Despite the grand promises of Big Data, uncertainty remains so abundant that specific human lives remain boundlessly unpredictable. Perhaps the most successful prediction engine of the Big Data era, at least in financial terms, is the Amazon recommendation algorithm. It’s a gigantic statistical machine worth a huge sum to the company. Also, it’s wrong most of the time.

Be sure to read this one. I especially liked the examples used to explain statistical concepts that sometimes feel mechanical in stat 101.

Tags: Hannah Fry, New Yorker, uncertainty

05 Sep 14:38

Acquia acquires Cohesion to simplify building Drupal sites

by Dries

I'm excited to announce that Acquia has acquired Cohesion, the creator of DX8, a software-as-a-service (SaaS) visual Drupal website builder made for marketers and designers. With Cohesion DX8, users can create and design Drupal websites without having to write PHP, HTML or CSS, or know how a Drupal theme works. Instead, you can create designs, layouts and pages using a drag-and-drop user interface.

Amazon founder and CEO Jeff Bezos is often asked to predict what the future will be like in 10 years. One time, he famously answered that predictions are the wrong way to go about business strategy. Bezos said that the secret to business success is to focus on the things that will not change. By focusing on those things that won't change, you know that all the time, effort and money you invest today is still going to be paying you dividends 10 years from now. For Amazon's e-commerce business, he knows that in the next decade people will still want faster shipping and lower shipping costs.

As I wrote in a recent blog post, no-code and low-code website building solutions have had an increasing impact on the web since the early 1990s. While no-code and low-code tools have been a trend for 25 years, I believe we're only at the beginning. There is no doubt in my mind that 10 years from today, we'll still be working on making website building faster and easier.

Acquia's acquisition of Cohesion is a direct response to this trend, empowering marketers, content authors and designers to build Drupal websites faster and cheaper than ever. This is big news for Drupal as it will lower the cost of ownership and accelerate the pace of website development. For example, if you are still on Drupal 7, and are looking to migrate to Drupal 8, I'd take a close look at Cohesion DX8. It could accelerate your Drupal 8 migration and reduce its cost.

Here is a quick look at some of my favorite features:

[Animated GIF: editing styles with Cohesion] An easy-to-use “style builder” enables designers to create templates from within the browser. The image illustrates how easy it is to modify styles, in this case a button design.

[Animated GIF: editing a page with Cohesion] In-context editing makes it really easy to modify content on the page and even change the layout from one column to two columns and see the results immediately.

I'm personally excited to work with the Cohesion team on unlocking the power of Drupal for more organizations worldwide. I'll share more about Cohesion DX8's progress in the coming months. In the meantime, welcome to the team, Cohesion!

05 Sep 14:38

"I wish it to be known that I am not pursuing any friendships at the moment because I cannot think of..."

“I wish it to be known that I am not pursuing any friendships at the moment because I cannot...
05 Sep 14:32

Open source funding and risks

Read the whole thing: Recap of the funding experiment by Feross Aboukhadijeh.

Unfortunately, when open source people say things like...

Maintainers do critical work which enables companies to create billions of dollars in value, yet we capture none of that value for ourselves.

Does it have to be like this?

I’m not arguing that maintainers should start capturing all of the value that we create. But we shouldn’t capture literally none of the value either. The status quo is not tenable.

I would love to find a way to help maintainers capture at least a bit of the value we create so that we can happily continue to write new features, fix bugs, answer user questions, improve documentation, and release innovative new software.

...what people who use open source in business are hearing is more like...

We're getting a lot of software value for nothing! Fist bump!

A simple appeal to do the right thing is not something that, as a downstream user, you can put in your budget.

When you use under-funded open source software, there is always a risk that if the maintainer doesn't get paid, they will either burn out or go get a high-intensity job and let their project fall on the floor. Can you justify paying open source maintainers in order to protect yourself from this risk?

That's a little more promising, but two areas need to be addressed.

  • Is the risk quantified? I can measure a software project's value to me, but not the probability of the maintainer quitting in the absence of support, so I don't know the total size of the risk. If I can't quantify the risk, I can't justify spending to avoid it.

  • Can I measure the benefit of participating? I don't know how much my choice to fund the project reduces the risk. I could put in my $100, see that the developer still can't live on that, and end up incurring just as much cost to replace the open-source dependency as if I had not invested. (See the sketch after this list.)
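
To make those two questions concrete, here is a minimal back-of-the-envelope sketch in Python of the expected-loss arithmetic a downstream user would need. Every number in it is a made-up placeholder; the point of the two bullets above is precisely that the abandonment probabilities are unknowable today.

    # Hypothetical expected-loss arithmetic; all numbers are placeholders.
    replacement_cost = 50_000     # cost to swap out the dependency if abandoned
    p_abandon_unfunded = 0.30     # guess: maintainer quits with no support
    p_abandon_funded = 0.25       # guess: my small contribution barely helps
    my_contribution = 100

    expected_loss_unfunded = p_abandon_unfunded * replacement_cost   # 15,000
    expected_loss_funded = p_abandon_funded * replacement_cost       # 12,500
    risk_reduction = expected_loss_unfunded - expected_loss_funded   #  2,500

    # On this narrow view, funding is justifiable only if the risk reduction
    # exceeds the contribution, and neither probability is measurable today.
    print(f"Risk reduction bought by ${my_contribution}: ${risk_reduction:,.0f}")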

IMHO we need better market design in order to deal with those two problems. I personally think that models based on dominant assurance contracts and/or futures markets are promising (more on that later) but just banning an interesting idea after its first deployment is counterproductive.

05 Sep 14:24

USB launches new ‘USB4’ spec that’s basically just Thunderbolt 3

by Jonathan Lamont

Following months of feedback, the USB Implementers Forum (USB-IF) published the official USB4 spec — and like its predecessor, USB4 will have confusing names.

USB-IF initially announced the new rules for the Universal Serial Bus standard in March. After a period of review, the new spec is now official with new speeds and other features that basically make it the same as Intel’s Thunderbolt 3.

To start, USB4 will retain the Type-C connector. It’ll also continue to handle charging with the Power Delivery standard.

USB4 will also double its two-lane data throughput to 40Gbps on certified cables. On top of that, it can handle multiple display link-ups with up to two simultaneous 4K 60fps feeds. Finally, USB4 brings PCIe support for external GPUs.

Additionally, USB4 will retain backwards compatibility with USB 3.2 (whose naming subsumes the older USB 3.1 Gen 1, USB 3.1 Gen 2 and USB 3.0 labels), as well as USB 2.0. Plus, USB4 will essentially be cross-compatible with Thunderbolt 3 devices.

While that should be an excellent change for consumers — less confusion between Thunderbolt and USB — USB4 will prove confusing in other areas. According to a TechRepublic report, there will be at least two speed tiers for USB4: 20Gbps or 40Gbps. USB-IF will reportedly call the tiers USB4 Gen 2×2 and USB4 Gen 3×2 respectively.

The naming convention follows that of USB 3.2, which offered USB 3.2 Gen 2×2 for 20Gbps and USB 3.2 Gen 1×2 for 10Gbps throughput. Unfortunately, the naming convention is downright confusing, even if it's consistent.
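
As a quick reference, here is a small Python sketch that simply restates the tier names and nominal speeds cited above; the figures come from this article's sources, and the mapping itself is only an illustrative summary.

    # Nominal throughput per tier name, in Gbps, as cited in this article.
    SPEED_TIERS_GBPS = {
        "USB 3.2 Gen 1x2": 10,
        "USB 3.2 Gen 2x2": 20,
        "USB4 Gen 2x2": 20,
        "USB4 Gen 3x2": 40,   # the same ceiling as Thunderbolt 3
    }

    for name, gbps in sorted(SPEED_TIERS_GBPS.items(), key=lambda kv: kv[1]):
        print(f"{name}: {gbps} Gbps")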

USB-IF says manufacturers that use the USB standard will be able to learn more about USB4 at upcoming USB Developer Days conferences in the fall. However, consumers likely won’t see USB4 products until late 2020 at the earliest.

Source: USB-IF, (2) Via: TechRepublic, Android Police

The post USB launches new ‘USB4’ spec that’s basically just Thunderbolt 3 appeared first on MobileSyrup.