Shared posts

19 Oct 15:34

Samsung opens new artificial intelligence centre in Montreal

by Bradly Shankar
Samsung logo

Samsung has announced that it is opening a new artificial intelligence (AI) centre in Montreal.

This is the South Korean tech giant’s second AI centre to be opened in Montreal, following the establishment of its first AI hub in the city in September 2017. Samsung also opened an AI centre in Toronto back in May.

According to Samsung, the new Montreal centre will “allow Samsung to expand its outpost for industry collaboration and talent recruitment” in Canada. The company says the centre will serve as a hub for the development of “core AI technologies that entail machine learning, language, vision and other multi-modal interactions.”

Samsung says it decided to double its efforts in Montreal due to the city being a source of “key AI talent,” housing 250 researchers and 9000 university students in related programs. Specifically, Samsung noted that there are leading researchers at the nearby McGill University and the University of Montreal who have longstanding relationships with the company. Additionally, Samsung says Greater Montreal provides an opportunity to better collaborate with regional start-ups and expand the current ecosystem.

“By leveraging the power of AI in Samsung’s products and services, we must focus on creating new values, never seen nor experienced before,” said Seunghwan Cho, executive vice president of Samsung Research, in a press statement. “To do this, seven Global AI Centres, including the Montreal AI Centre, will play a pivotal role.”

“Montreal is a vibrant and creative city with well-known academic institutions and some of the highest standards in quality of life allowing us to attract some of the best talents around the world,” added Montreal mayor Valerie Plante in the same press release.

“Samsung’s AI Centre reinforces Montreal’s unique position and reputation as a global leader in the field of artificial intelligence. Their work with faculties, students, and the broader academic community highlights the city’s international reach and the strengths of a business ecosystem that is firmly focused on collaboration, research and innovation.”

Samsung’s other North American AI centres are located in New York and Silicon Valley. The company also has global AI centres operating in Korea, Russia and the UK. Samsung is increasing its AI centre outreach as part of a larger goal to add AI functionality to all of its products and services by 2020. Earlier this year, the company also announced a $29 billion CAD investment in AI, 5G and automotive initiatives.

Source: Samsung

The post Samsung opens new artificial intelligence centre in Montreal appeared first on MobileSyrup.

19 Oct 15:34

Skills Inventory

by Mike Rieser

A team is asked to take on a special project, but they feel uneasy because they don't have all of the skills necessary for a successful outcome.

Have you experienced that problem before? Perhaps a Skills Inventory could help.

This is a skills inventory I helped a team create in the past (I’ve redacted their names for anonymity):

Tabular representation of skills, people and their corresponding strength.
Example Skills Inventory


There were two teams that would be pioneering the inclusion of some new technologies into their website. They needed to make the site responsive/adaptive, but older Java technologies like Struts 1.1 were preventing the move to HTML5, so those were going to be removed and replaced with mostly Spring technologies. They wanted to use some of the practices of TDD, BDD, and automated UI testing. Even the old Ant-based build system was being replaced with Gradle. The mood was pretty gloomy, weighed down by so many things to learn and the number of unknowns they had. It felt more like they had been set up to fail than to succeed. They had taken a basic Spring course but hadn't applied any of it on a project yet. I suggested we create a Skills Inventory and a plan for how they would handle so many of the things they needed to learn. After we finished, the atmosphere was notably lighter. Everyone knew there was work ahead, but they felt like they had an approach to conquer it.


Here is a checklist for how to create a Skills Inventory. I'll elaborate on each step below:

  • Get the entire team together
  • Make safety a prerequisite
  • List the new/additional skills or technologies
  • Explain the scale
  • Team members self-rate (can be done in parallel)
  • Find volunteers to “scout” ahead of the team and bring back learning
  • Make everyone aware of the “experts” they have in the room (and outside)

Get the entire team together

I’ve only ever done this as a whole team, and it’s important to have the whole team present. A team often knows more than it thinks it does: team members have additional skills or knowledge that the team as a whole doesn’t know about. You want these to be discovered.

Make Safety a Prerequisite

A team in a situation like this one can feel a little edgy.  When emotions are running high, it helps to make alternatives and options as visible as possible and assess them objectively.

Depending on the organization and environment, it could be a bit sensitive to ask team members to rate themselves. In practice, I haven’t had a problem so far, as it has already been recognized that they have no expertise in these areas, and this is an attempt to do something to help. So this step moves along quickly, but I always introduce it and ask if everyone is comfortable with it. By the way, I don’t include things the team already knows: this was a Java shop, so I didn’t list Java.

List the new/additional skills or technologies

This can be gathered ahead of time or done at the meeting. Create a list of the notable skills, practices, or technologies that the team is concerned about or lacking for the mission at hand.

Explain the scale

Personally, I hate reinventing the wheel. I’d rather use an already validated mechanism than try to create my own.  In the field of software development skills assessment, I use the Seven Stages of Expertise in Software Engineering by Meilir Page-Jones. It’s a 7-point scale. I summarize it for the team this way:

Expertise Scale

  1. Innocent – may have heard of it
  2. Exposed – can use it correctly in a sentence
  3. Apprentice – has tried it, or taken a class
  4. Practitioner – has used it successfully on a project
  5. Journeyman – uses it all the time, or on multiple projects, can mentor
  6. Master – teaches it, transcends rules
  7. Researcher – writes books, conferences, etc.

Team members self-rate

Let the team members self-rate. In the photo, I put everyone’s name at the top of a column and they filled in the table for their name. With enough dry-erase markers, this can be done in parallel.  Whenever self-assessment comes into play, it’s good to consider the Dunning-Kruger Effect (Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments).

For scores of 1 through 3, we treat them all the same (as unskilled). For scores of 4 and 5, we are really asking whether you’ve completed a project successfully with the item (4), or whether you use it all the time or have completed multiple projects with it (5). So rather than asking “are you an expert,” we’re simply asking, “how frequently have you used it?” Ratings higher than 5 I treat as a 5 (skilled). The risk is believing you have an expert on the team when you don’t. This should surface rather quickly, or you can designate backup scouts in the next step.
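The rating rules above can be sketched in a few lines of code. This is just an illustration; the names, skills, and ratings here are hypothetical, not those of the redacted team:

```python
# Hypothetical Skills Inventory: skill -> {person: self-rating on the 7-point scale}.
inventory = {
    "Spring": {"Ann": 3, "Raj": 4, "Mei": 1},
    "Gradle": {"Ann": 1, "Raj": 2, "Mei": 2},
    "TDD":    {"Ann": 5, "Raj": 3, "Mei": 2},
}

def summarize(ratings):
    """Map each person's self-rating to a coarse level: 1-3 unskilled, 4+ skilled."""
    summary = {}
    for person, score in ratings.items():
        capped = min(score, 5)  # treat anything above 5 as a 5
        summary[person] = "skilled" if capped >= 4 else "unskilled"
    return summary

def needs_scout(ratings):
    """A scout is worth assigning when nobody on the team is skilled yet."""
    return all(level == "unskilled" for level in summarize(ratings).values())

for skill, ratings in inventory.items():
    print(skill, summarize(ratings), "needs scout:", needs_scout(ratings))
```

In this sketch, Gradle would get a scout (nobody is above a 2), while Spring and TDD already have a skilled person.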

Find volunteers to “scout” ahead of the team and bring back learning

Once the skill inventory is completed, it should give the team information to decide how to tackle their problem.

The approach I’ve taken is to ask for volunteers to be the “scout” for a particular item.  A scout’s job is to take point, scout out ahead of the team, and spend time learning their particular item.  Often a few hours spent by a scout can be distilled into the few minutes that are relevant in the context of the team’s application.  Try to route all questions through the scout; if they don’t have an answer and someone else tracks it down, let the scout know the answer as it’s discovered.  You’ll be growing an expert this way. Eventually, everyone should level out and gain expertise in everything needed, but while your team is still mostly in the dark this strategy will help them make progress. With this divide-and-conquer strategy, each scout takes the hit on learning a different item and brings back the distilled ideas to spread across the team.

In the picture above, the first and fifth columns were Tech Leads on their teams.  The second Tech Lead loved new stuff and volunteered to take on quite a bit. A (5) on the table indicated the team already had a skilled person in that area, so designating them as a scout wasn't really necessary (sorry for the inconsistency).  The sixth column was a developer who was new to the team and somewhat shy. The reason for the duplicates was that they wanted a scout on each team.

A quick note about Mob Programming

I used this technique before I learned about mob programming.  I’m now a huge fan of mob programming and would highly recommend it.  Among other things, it has a huge leveling effect, spreading the expertise in one (or a few) team members’ heads throughout the whole team.  It lifts those without expertise very quickly, and it can’t be beat for onboarding new members. When no one in the room has a clue about something, though, it’s not a stellar experience to all read a tutorial together, and it smacks of duplicated learning.  In a mobbing situation, it’s best to have an expert (stage 5 or above) come mob with the team to transfer knowledge. In the absence of an outside expert, I’d still have scouts concentrate on their particular topics outside the mob, so they can share what they learn when in the mob.

Make everyone aware of the “experts” they have in the room (and outside)

On the completed table I circled the intersection of a person (column) and an item (row) to indicate that person volunteered to be the scout for that item.  It is helpful to also list “outside experts,” skilled people outside of the team, who could be used as a resource when needed. It’s good to have your scout along when talking with an outside expert.


Having the necessary skills is critical for success; ideally, teams are equipped with stage-5 team members.  If you don’t have that, you’ll need to cultivate the expertise. When birds fly in a V-formation, the lead bird has it the hardest, but its efforts make it easier for the rest of the flock. Until everyone has gained the necessary expertise, rather than everyone having to struggle with everything all at once, it’s helpful to have volunteers each take point on an area and share with each other what they’ve learned.


Seven Stages of Expertise in Software Engineering by Meilir Page-Jones

Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments by Justin Kruger and David Dunning

The post Skills Inventory appeared first on Industrial Logic.

19 Oct 15:34

Pixelmator Pro 1.2 Quicksilver out now

by Pixelmator Team

Pixelmator Pro 1.2 Quicksilver has just been released, adding full macOS Mojave support, a beautiful new light appearance, ML Enhance, batch processing via Automator, and a whole lot more. Everyone at the Pixelmator Team is really excited about this update, so we just wanted to tell you guys about what we’ve added and why we think it’s great.

Light appearance, refreshed dark appearance, and macOS Mojave support

We all love macOS Mojave and the new Dark Mode. And in Pixelmator Pro 1.2, the interface has now been beautifully redesigned to fit right in alongside native Mac apps, like Safari and Keynote, in Dark Mode. We’ve also added support for accent colors, so the colors of various sliders and buttons in Pixelmator Pro will change to match your preferences. No less than you’d expect from the ultimate Mac app.

Of course, now that there’s a Dark Mode, there has to be a Light Mode too. So it was the perfect time for us to add a light appearance to Pixelmator Pro. It’s something we’d been thinking about for a long time and the new light appearance brings the classic Mac app look and feel to Pixelmator Pro. You can always change appearance in Pixelmator Pro preferences and even have it update automatically according to your System Preferences.

ML Enhance

The new ML Enhance feature lets you automatically enhance photos using a Core ML-powered machine learning algorithm trained on 20 million professional photos. The idea behind ML Enhance is to balance the exposure, correct white balance, and improve individual color ranges in a photo to give you the best starting point for making your own creative edits. All in all, ML Enhance intelligently fine-tunes a total of 37 color adjustments, so all you need to do is add your own finishing touches. See it in action in the video below.

Batch processing via Automator

Automator support has been a pretty popular request over the past few months and Pixelmator Pro now has five powerful Automator actions for you to use. So you can make complex edits to many images at once without ever opening Pixelmator Pro! We’ll be releasing a tutorial about this in the coming days, which will also touch upon Automator basics. If you’ve never tried out Automator, stay tuned for the tutorial and get ready to pick up some new and useful Mac skills!

  • Auto White Balance Images
  • Auto Enhance Images
  • Apply Color Adjustments to Images
  • Apply Effects to Images
  • Change Type of Images

What’s next

As ever, we’re always dreaming up what else to bring to Pixelmator Pro and we’re actually already working on the next few updates — we can’t wait to make Pixelmator Pro even better. For now, we really hope you love this update and if you have any comments or feedback, we’d love to hear it here, on Twitter and Facebook, via email, on the Mac App Store, and, well, pretty much anywhere else you might want to share it with us. As always, the update is free for existing users and it’s available for you to download from the Mac App Store today.

P.S. Pixelmator Pro is still on sale for another week (until Friday 26th), so if you haven’t bought it yet, now’s a great time.

Download now

19 Oct 15:34

Redesigning a website using CSS Grid and Flexbox

by Dries

For the last 15 years, I've been using floats for laying out the web pages on my site. This approach to layout involves a lot of trial and error, including hours of fiddling with widths, max-widths, margins, absolute positioning, and the occasional calc() function.

I recently decided it was time to redesign my site, and decided to go all-in on CSS Grid and Flexbox. I had never used them before but was surprised by how easy they were to use. After all these years, we finally have a good CSS layout system that eliminates all the trial-and-error.

I don't usually post tutorials on my blog, but decided to make an exception.

What is our basic design?

The overall layout of the homepage is shown below. The page consists of two sections: a header and a main content area. For the header, I use CSS Flexbox to position the site name next to the navigation. For the main content area, I use CSS Grid Layout to lay out the article across 7 columns.

Css page layout

Creating a basic responsive header with Flexbox

Flexbox stands for the Flexible Box Module and allows you to manage "one-dimensional layouts". Let me further explain that by using a real example.

Defining a flex container

First, we define a simple page header in HTML:

<div id="header">
  <div>Site title</div>
  <nav>
    …
  </nav>
</div>

To turn this into a Flexbox layout, simply give the container the following CSS property:

#header {
  display: flex;
}
By setting the display property to flex, the #header element becomes a flex container, and its direct children become flex items.

Css flexbox container vs items

Setting the flex container's flow

The flex container can now determine how the items are laid out:

#header {
  display: flex;
  flex-direction: row;
}

flex-direction: row; will place all the elements in a single row:

Css flexbox direction row

And flex-direction: column; will place all the elements in a single column:

Css flexbox direction column

This is what we mean by a "one-dimensional layout". We can lay things out horizontally (row) or vertically (column), but not both at the same time.

Aligning a flex item

#header {
  display: flex;
  flex-direction: row;
  justify-content: space-between;
}

Finally, the justify-content property is used to horizontally align or distribute the flex items in their flex container. Different values exist, such as flex-start, center, and space-between; here, justify-content: space-between maximizes the space between the site name and the navigation.

Css flexbox justify content

Making a Flexbox container responsive

Thanks to Flexbox, making the navigation responsive is easy. We can change the flow of the items in the container using only a single line of CSS. To make the items flow differently, all we need to do is change or overwrite the flex-direction property.

Css flexbox direction row vs column

To stack the navigation below the site name on a smaller device, simply change the direction of the flex container using a media query:

@media all and (max-width: 900px) {
  #header {
    flex-direction: column;
  }
}

On devices that are less than 900 pixels wide, the menu will be rendered as follows:

Css flexbox direction column

Flexbox makes it really easy to build responsive layouts. I hope you can see why I prefer using it over floats.

Laying out articles with CSS Grid

Flexbox deals with layouts in one dimension at a time, either as a row or as a column. This is in contrast to CSS Grid Layout, which allows you to use rows and columns at the same time. In this next section, I'll explain how I use CSS Grid to make the layout of my articles more interesting.

Css grid layout

For our example, we'll use the following HTML code:


<article>
  <h1>Lorem ipsum dolor sit amet</h1>
  <p>Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium.</p>
  <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.</p>
  <figure>
    <img src="…" alt="…" />
  </figure>
  <p>Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.</p>
  <p>Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt.</p>
  <footer>
    <p>Some meta data</p>
    <p>Some meta data</p>
    <p>Some meta data</p>
  </footer>
</article>

Simply put, CSS Grid Layout allows you to define columns and rows. Those columns and rows make up a grid, much like an Excel spreadsheet or an HTML table. Elements can be placed onto the grid. You can place an element in a specific cell, or an element can span multiple cells across different rows and different columns.

We apply a grid layout to the entire article and give it 7 columns:

article {
  display: grid;
  grid-template-columns: 1fr 200px 10px minmax(320px, 640px) 10px 200px 1fr;
}

The first statement, display: grid, sets the article to be a grid container.

The second statement, grid-template-columns, defines the different columns in our grid. In our example, we define a grid with seven columns. The middle column is defined as minmax(320px, 640px), and will hold the main content of the article. minmax(320px, 640px) means that the column can stretch from 320 pixels to 640 pixels, which helps make it responsive.

On each side of the main content section there are three columns. Columns 3 and 5 provide 10 pixels of padding. Columns 2 and 6 are defined to be 200 pixels wide and can be used for metadata or for allowing an image to extend beyond the width of the main content.

The outer columns are defined as 1fr, and act as margins as well. 1fr stands for fraction or fractional unit. The width of the fractional units is computed by the browser: it takes the space that is left after the fixed-width columns and divides it by the number of fractional units. In this case we defined two fractional units, one for each of the two outer columns. The two outer columns will be equal in size and make sure that the article is centered on the page. If the browser is 1440 pixels wide, the fixed columns will take up 1060 pixels (640 + 10 + 10 + 200 + 200). This means there are 380 pixels left (1440 - 1060). Because we defined two fractional units, column 1 and column 7 will each be 190 pixels wide (380 divided by 2).
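The fr arithmetic can be checked with a quick script. This is just an illustration of the computation the browser performs, using the column widths from grid-template-columns above and assuming the middle column is at its 640-pixel maximum:

```python
# Compute the width of each 1fr column for the 7-column grid:
# 1fr 200px 10px minmax(320px, 640px) 10px 200px 1fr
viewport = 1440
fixed_columns = [200, 10, 640, 10, 200]  # columns 2-6, middle at its max
fr_units = 2                             # columns 1 and 7 are 1fr each

free_space = viewport - sum(fixed_columns)  # space left for the fr columns
fr_width = free_space / fr_units            # each fr unit gets an equal share

print(sum(fixed_columns), free_space, fr_width)  # 1060 380 190.0
```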

Css grid layout columns

While we have to explicitly declare the columns, we don't have to define the rows. The CSS Grid Layout system will automatically create a row for each direct child of our grid container, the article element.

Css grid layout rows

Now that we have the grid defined, we have to assign content elements to their locations in the grid. By default, the CSS Grid Layout system uses a flow model: it will automatically assign content to the next open grid cell. Most likely, you'll want to explicitly define where the content goes:

article > * {
  grid-column: 4 / -4;
}

The code snippet above makes sure that all elements that are direct children of article start at the 4th column line of the grid and end at the 4th column line from the end. To understand that syntax, I have to explain the concept of column lines or grid lines:

Css grid layout column lines

By using grid-column: 4 / -4, all elements will be displayed in the "main column" between column line 4 and -4.
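To illustrate how negative line numbers resolve, here is a small helper (purely illustrative, not part of any CSS engine). A grid with 7 columns has 8 grid lines, and negative numbers count from the end, with -1 being the last line:

```python
def resolve_line(line, columns):
    """Resolve a grid-column line number (possibly negative) to an absolute line."""
    lines = columns + 1                     # n columns -> n + 1 grid lines
    return line if line > 0 else lines + 1 + line  # -1 maps to the last line

# "grid-column: 4 / -4" on our 7-column grid:
start, end = resolve_line(4, 7), resolve_line(-4, 7)
print(start, end)  # 4 5 -> the single middle content column
```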

However, we want to overwrite that default for some of the content elements. For example, we might want to show metadata next to the content or we might want images to be wider. This is where CSS Grid Layout really shines. To make our image take up the entire width, we'll just tell it to span from the first to the last column line:

article > figure {
  grid-column: 1 / -1;
}

To put the metadata to the left of the main content, we write:

#main article > footer {
  grid-column: 2 / 3;
  grid-row: 2 / 4;
}

Css grid layout placement

I hope you enjoyed reading this tutorial and that you are encouraged to give Flexbox and Grid Layouts a try in your next project.

19 Oct 15:33

Friction-Free Racism

by Chris Gilliard

Once upon a time I intended to do a project about the ways that black folks experienced how they were perceived in America. It would document those times when something happens and you say to yourself, “Wow, in the eyes of some folks, I’m still just a black person.” Every black person I’ve ever met has had at least one of these moments, and often several, even after they became “successful.” My title for the project was “Still a Negro.”

The idea was partly inspired by the years when I taught at a university in Detroit in a summer outreach program for kids, mainly from the city’s middle and high schools who were considering an engineering career. Normally I taught in the liberal arts building, but the outreach program was in the slightly less broken-down engineering building across campus. Often I would go over early to use the faculty copier there, and every year I taught in the program — eight years total — someone from the engineering faculty would reprimand me for using the copier. I knew it would happen, and yet each time I was surprised.

Reprimand is probably not a strong enough word to cover the variety of glares, stares, and accusations I elicited. I guess it’s possible that they didn’t recognize me as one of their colleagues, but what pissed me off was that their reactions suggested that they didn’t believe I could be: They gaped at me, accused me of lying, demanded proof that I really was faculty, or announced that they’re not going to argue with a student. Simple courtesy should have prevented such behavior. Since my own behavior was not out of place, it could only mean that I looked out of place. And then, for the ten millionth time, I would say to myself, “Oh my god! I forgot that I’m still a Negro.”

The fact that this happened in the engineering department was not lost on me. Questions about the inclusivity of engineering and computer science departments have been going on for quite some time. Several current “innovations” coming out of these fields, many rooted in facial recognition, are indicative of how scientific racism has long been embedded in apparently neutral attempts to measure people — a “new” spin on age-old notions of phrenology and biological determinism, updated with digital capabilities.

Only the most mundane uses of biometrics and facial recognition are concerned with only identifying a specific person, matching a name to a face or using a face to unlock a phone. Typically these systems are invested in taking the extra steps of assigning a subject to an identity category in terms of race, ethnicity, gender, sexuality, and matching those categories with guesses about emotions, intentions, relationships, and character to shore up forms of discrimination, both judicial and economic.

Certainly the practice of coding difference onto bodies is not new; determining who belongs in what categories — think about battles over citizenship, the “one drop” rule of blackness, or discussions about how to categorize trans people — is a longstanding historical, legal, and social project made more real and “effective” by whatever technologies are available at the time. As Simone Browne catalogs in Dark Matters, her groundbreaking work on the historical and present-day surveillance of blackness, anthropometry

was introduced in 1883 by Alphonse Bertillon as a system of measuring and then cataloguing the human body by distinguishing one individual from another for the purposes of identification, classification, and criminal forensics. This early biometric information technology was put to work as a ‘scientific method’ alongside the pseudo-sciences of craniometry (the measurement of the skull to assign criminality and intelligence to race and gender) and phrenology (attributing mental abilities to the shape of the skull, as the skull was believed to hold a brain made up of the individual organs).

A key to Browne’s book is her detailed look at the way that black bodies have consistently been surveilled in America: The technologies change, but the process remains the same. Browne identifies contemporary practices like facial recognition as digital epidermalization: “the exercise of power cast by the disembodied gaze of certain surveillance technologies (for example, identity card and e-passport verification machines) that can be employed to do the work of alienating the subject by producing a ‘truth’ about the body and one’s identity (or identities) despite the subject’s claims.”

Iterations of these technologies are already being used in airports, at borders, in stadiums, and in shopping malls — not just in countries like China but in the United States. A number of new companies, including Faception, NTechLab, and BIOPAC systems, are advancing the centuries-old project of phrenology, making the claim that machine learning can detect discrete physical features and make data-driven predictions about the race, ethnicity, sexuality, gender, emotional state, propensity for violence, or character of those who possess them.

Many current digital platforms proceed according to the same process of writing difference onto bodies through a process of data extraction and then using “code” to define who is what. Such acts of biometric determinism fit with what has been called surveillance capitalism, defined by Shoshana Zuboff as “the monetization of free behavioral data acquired through surveillance and sold to entities with interest in your future behavior.” Facebook’s use of “ethnic affinity” as a proxy for race is a prime example. The platform’s interface does not offer users a way to self-identify according to race, but advertisers can nonetheless target people based on Facebook’s ascription of an “affinity” along racial lines. In other words, race is deployed as an externally assigned category for purposes of commercial exploitation and social control, not part of self-generated identity for reasons of personal expression. The ability to define one’s self and tell one’s own stories is central to being human and how one relates to others; platforms’ ascribing identity through data undermines both.

These code-derived identities in turn complement Silicon Valley’s pursuit of “friction-free” interactions, interfaces, and applications in which a user doesn’t have to talk to people, listen to them, engage with them, or even see them. From this point of view, personal interactions are not vital but inherently messy, and presupposed difference (in terms of race, class, and ethnicity) is held responsible. Platforms then promise to manage the “messiness” of relationships by reducing them to transactions. The apps and interfaces create an environment where interactions can happen without people having to make any effort to understand or know each other. This is a guiding principle of services ranging from Uber, to Amazon Go grocery stores, to touchscreen-ordering kiosks at fast-food joints. At the same time racism and othering are rendered at the level of code, so certain users can feel innocent and not complicit in it.

In an essay for the engineering bulletin IEEE Technology and Society, anthropologist Sally Applin discusses how Uber “streamlined” the traditional taxi ride:

They did this in part by disrupting dispatch labor (replacing the people who are not critical to the actual job of driving the car with a software service and the labor of the passenger), removing the language and cultural barriers of communicating directly with drivers (often from other countries and cultures), and shifting traditional taxi radio communications to the internet. [Emphasis added]

In other words, interacting with the driver is perceived as a main source of “friction,” and Uber is experienced as “seamless” because it effaces those interactions.

But this expectation of seamlessness can intensify the way users interpret difference as a pretext for a discount or a bad rating. As the authors of “Discriminating Tastes: Uber’s Customer Ratings as Vehicles for Workplace Discrimination” point out:

Because the Uber system is designed and marketed as a seamless experience (Uber Newsroom, 2015), and coupled with confusion over what driver ratings are for, any friction during a ride can cause passengers to channel their frustrations …

In online markets, consumer behavior often exhibits bias based on the perceived race of another party to the exchange. This bias often manifests via lower offer prices and decreased response rates … More recently, a study of Airbnb … found that guests with African–American names were about 16 percent less likely to be accepted as rentees than guests with characteristically white names.

“Ghettotracker,” which purported to identify neighborhoods to avoid, and other apps like it (SafeRoute, Road Buddy) are further extensions of the same data-coding and “friction”-eliminating logic. These apps allow for discrimination against marginalized communities by encoding racist articulations of what constitutes danger and criminality. In effect, they extend the logic of policies like “broken windows” and layer a cute interface on top of it.

Given the primacy of Google Maps and the push for smart cities, what happens when these technologies are combined in or across large platforms? Even the Netflix algorithm has been critiqued for primarily offering “Black” films to certain groups of people. What happens when the stakes are higher? Once products and, more important, people are coded as having certain preferences and tendencies, the feedback loops of algorithmic systems will work to reinforce these often flawed and discriminatory assumptions. The presupposed problem of difference will become even more entrenched, and the chasms between people will widen.
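The mechanics of such a feedback loop can be made concrete in a few lines of code. The toy model below is purely illustrative (it is not drawn from Netflix, Uber, or any system discussed here): two groups behave identically, yet an allocator that starts from a biased assumption and reallocates based on what it finds never corrects itself, because it only finds things where it looks.

```python
# Toy model (an illustration, not code from any system named above): an
# automated allocator splits 100 units of "scrutiny" between two groups
# whose true incidence rate is IDENTICAL. It starts from a biased
# assumption and reallocates according to what it finds -- but it only
# finds things where it looks, so the bias confirms itself forever.
TRUE_RATE = 0.1                    # the same for both groups
scrutiny = {"A": 70.0, "B": 30.0}  # flawed prior, not observed behavior

for _ in range(50):
    # flags found scale with how hard you look, not with how groups behave
    flags = {g: n * TRUE_RATE for g, n in scrutiny.items()}
    total = sum(flags.values())
    scrutiny = {g: 100 * flags[g] / total for g in flags}

print(scrutiny)  # still (essentially) a 70/30 split after 50 rounds
```

No amount of additional data gathered this way corrects the starting assumption; the loop is self-confirming by construction.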

At its root then, surveillance capitalism and its efficiencies ease “friction” through dehumanization on the basis of race, class, and gender identity. Its implementation in platforms might be categorized as what, in Racial Formation in the United States, Michael Omi and Howard Winant describe as “racial projects”: “simultaneously an interpretation, representation, or explanation of racial dynamics and an effort to reorganize and redistribute resources along particular racial lines.”

The impulse to find mathematical models that will not only accurately represent reality but also predict the future forces us all into a paradox. It should be obvious that no degree of measurement, whether done by calipers or facial recognition, can accurately determine an individual’s identity independent of the social, historical, and cultural elements that have informed identity. Identification technologies are rooted in the history of how our society codes difference, and they have proved a profitable means of sustaining the regimes grounded in the resulting hierarchies. Because companies and governments are so heavily invested in these tools, critics are often left to call for better representation and more heavily regulated tools (rather than their abolishment) to at least “eliminate bias” to the extent that fewer innocent people will be tagged, detained, arrested, scrutinized, or even killed.

Frank Pasquale, in “When Machine Learning Is Facially Invalid,” articulates this well: The vision of better “facial inference projects” through more complete datasets “rests on a naively scientistic perspective on social affairs,” but “reflexivity (the effect of social science on the social reality it purports to merely understand) compromises any effort (however well intended) to straightforwardly apply natural science methods to human beings.”

There is no complete map of difference, and there can never be. That said, I want to indulge proponents of these kinds of tech for a moment. What would it look like to be constantly coded as different in a hyper-surveilled society — one where there was large-scale deployment of surveillant technologies with persistent “digital epidermalization” writing identity onto every body within the scope of its gaze? I’m thinking of a not-too-distant future where not only businesses and law enforcement constantly deploy this technology, as with recent developments in China, but also where citizens going about their day use it as well, wearing some version of Google Glass or Snapchat Spectacles to avoid interpersonal “friction” and identify the “others” who do or don’t belong in a space at a glance. What if Permit Patty or Pool Patrol Paul had immediate, real-time access to technologies that “legitimized” black bodies in a particular space?

I don’t ask this question lightly. Proponents of persistent surveillance articulate some form of this question often and conclude that a more surveillant society is a safer one. My answer is quite different. We have seen on many occasions that more and better surveillance doesn’t equal more equitable or just outcomes, and it often results in the discrimination being blamed on the algorithm. Further, these technological solutions can render the bias invisible. While it isn’t based on biometrics, consider the difference between detecting “old-fashioned” housing discrimination and detecting how Facebook can be used to digitally redline users by making housing ads visible only to white users.

But I’d like to take it a step further. What would it mean for those “still a negro” moments to become primarily digital — invisible to the surveilled yet visible to the people performing the surveillance? Would those being watched and identified become safer? Would interactions become more seamless? Would I no longer be rudely confronted while making copies?

The answer, on all counts, is no — and not just because these technologies cannot form some complete map of a person, their character, and their intent. While it would be inaccurate to say that I as a Black man embrace the “still a negro” moments, experiencing them gives me important information about how I’m perceived in a particular space and even about the degree to which I’m safe in that space. The end game of a surveillance society, from the perspective of those being watched, is to be subjected to the whims of black-boxed code while navigating spaces systematically stripped of important social and cultural cues. The personalized surveillance tech, meanwhile, will not make people less racist; it will make them more comfortable and protected in their racism.

19 Oct 15:33

Bridging vs. Bonding

by Richard Millington

A bridge provides a connection across a chasm.

You and your team can be the bridge between your organization and the community.

You can shuttle (and filter) information from one group to the next.

Most community professionals are the bridge between their community and the rest of the organization.

Bonding is different.

Bonding removes the chasm by interlinking the two. The deeper the links, the stronger the bond.

You’re bonding when you persuade employees to test their marketing ideas in the community, source case studies from members, and get developers to host AMA chats directly with community members.

You’re bonding when you persuade members to provide direct feedback to engineers, have voting rights on key brand decisions, and invite members in to meet the staff and CEO.

The problem with being the bridge is that your colleagues don’t get direct experience of the community. They don’t see the enthusiasm, the remarkable contributions, or the opportunities that you do.

The problem with building bonds is it takes time, persuasion, and subtle nudging to gradually get more colleagues to see the community as an objective which directly supports their work.

At the moment, we have far too many bridges and far too few bonds.

19 Oct 15:32

Twitter Favorites: [counti8] I’m eating a mini burger made with a pineapple bun and it is BLOWING my sweet-salty mind, folks. (Which is really…

Karen Quinn Fung | 馮皓珍 @counti8
19 Oct 15:32

$39 Apple Watch USB-C magnetic charging cable now available in Canada

by Igor Bonifacic
Apple Watch Series 4 hanging

Alongside the iPhone XR, which Canadian consumers can pre-order starting today, Apple has launched a new Apple Watch magnetic charging cable.

The new cable features a USB-C connection, allowing owners of the company’s recent computers, including the MacBook Pro and MacBook, to charge their Watch without a USB-A to USB-C adapter.

According to the Apple online store, the cable will be available to purchase at the company’s physical retail locations as early as October 24th. Deliveries, likewise, will start arriving on October 24th.

Source: Apple Via: iPhone in Canada

The post $39 Apple Watch USB-C magnetic charging cable now available in Canada appeared first on MobileSyrup.

19 Oct 15:31

Coming Soon to This Theater :: Surface Go

by Volker Weber


All the Surface Go reviews are too wishy-washy for me. I want to know whether I could work with it at all times and without restrictions. Not some abstract user with an imagined profile, but me. For a self-experiment, Microsoft will lend me a Go very soon. And then we’ll see. I will need about a day to pull all my workflows over from my Surface Pro, but after that I should be productive fairly quickly. The interesting question is not the screen, since I run everything in Tablet Mode anyway; that means no overlapping windows and no window frames. The really interesting part will be the very small keyboard. That will probably be the sticking point. I am less worried about the processor and memory.

This will be a test out of pure curiosity. A Go cannot win against a Pro i7/16/512. But perhaps less is more after all.

19 Oct 15:31

It looks like the new iPad Pro really will ditch Lightning for USB-C: rumour

by Patrick O'Rourke

While rumours regarding Apple’s upcoming iPad Pro revision nixing the traditional Lightning port in favour of USB-C have been circulating for some time now, a new report from Japanese publication Mac Otakara once again backs up this claim.

According to information the publication gathered from accessory manufacturers at the Global Source Mobile Electronics Trade Fair in Hong Kong, there is significant “talk” at the show related to Apple making the jump from Lightning to USB-C with the new iPad Pro.

Analyst Ming-Chi Kuo, who is typically reliable, stated back in September that he expects Apple’s 2018 iPad Pro models to utilize USB-C instead of Lightning. Further, Kuo said that the port will open up new functionality, including the ability to connect the iPad Pro to a 4K monitor, a first for Apple’s iPad line.

Mac Otakara goes on to state that manufacturers at the trade show cited a photo of the 2018 iPad Pro’s dimensions. The image indicates that the new iPad Pro is set to measure in at 7 inches wide (178.52mm) and 9.7 inches tall (247.64mm), with the larger model coming in at 8.5 inches wide (215mm) and 11 inches tall (280.66mm).

The new iPad Pro models will also feature reduced bezels that are approximately 6mm across all sides of the device. It’s still unclear if the new iPad Pro will feature a display notch like the iPhone XS’s to accommodate the device’s Face ID module.

Apple is expected to reveal its 2018 iPad Pro along with a new MacBook Air at the company’s upcoming fall hardware event on October 30th.

Image credit: Mac Otakara

Source: Mac Otakara


19 Oct 15:30

Tesla’s new Model 3 trim option costs $58,800 before incentives

by Brad Bennett
Tesla Model 3 on road

Electric car manufacturer Tesla’s entry-level vehicle, the Model 3, is now on sale for a base price of $58,800 CAD before incentives.

The Model 3 was announced as a consumer-level electric car that would start at $35,000 USD (roughly $45,843 CAD). Tesla has still been unable to hit that lower price point, but the $58,800 trim is a step in the right direction.

Tesla CEO Elon Musk announced the new option in an October 18th, 2018 tweet.

The new entry-level trim has a mid-range battery and a maximum range of 418 kilometres, along with a top speed of 201 kilometres per hour.

The car costs approximately $52,100 with incentives and rebates, but those electric vehicle rebates are only available in Quebec and British Columbia.

Musk followed up his tweet with another that said “Tesla rear-wheel drive cars do actually work well on snow and ice. We did our traction testing on an ice lake!”

Musk’s comments are reassuring for anyone wondering how a rear-wheel drive Model 3 would stand up to a Canadian winter.

The Model 3 is also available in a dual motor all-wheel drive trim, with a range of 499km. However, the cost of the base model all-wheel drive trim before incentives is $70,700 CAD.

Customers looking to buy a 499 km range Model 3 with rear-wheel drive instead can order one “off menu for another week or so,” according to a tweet from Musk.

Source: Tesla Via: Engadget


18 Oct 22:29

Somebody else’s book party

by Josh Bernoff

I am one of the most egocentric people you ever met, because I am so talented. (Modest, not so much.) It’s always been this way. So how did it feel to participate in somebody else’s party for a book I helped write? More than a year ago, Dave Frankland and Nick Worth asked for … Continued

The post Somebody else’s book party appeared first on without bullshit.

18 Oct 22:29

✚ Data Graphics Workflow

by Nathan Yau

As I worked on a wide range of charts recently, I got to thinking about workflow. How does one get from dataset to finished data graphic? This is my process. Read More

18 Oct 22:29

Samsung Galaxy Book 2

by Volker Weber

Samsung Galaxy Book 2

The Galaxy Book did not really stick with me. Fantastic screen, but terrible folio keyboard. This time Samsung tried harder to copy the Microsoft Surface Pro. It's running on a Snapdragon 850, which should give it excellent battery life, but it won't run the full suite of Windows apps.

18 Oct 22:29

E-Learning 3.0

This presentation explores the impact of the next wave of learning technologies emerging as a consequence of the significant and substantial changes coming to the World Wide Web. Online Learning 2018, Toronto, Ontario (Lecture) Oct 18, 2018 [Comment]
18 Oct 22:29

The Phone in the Crows Nest

by (Peter Rukavina)

On the third floor of the Sagendorph Building here at Yankee Publishing, at the end of the hall near where my temporary office sits, is a room called, internally, the “crow’s nest.” Many of my meetings this week have been hosted there, and in previous years I’ve set up temporary office there; in an earlier incarnation it was the office of my friend and colleague the late John Pierce.

I can walk out of the Crow’s Nest and onto the roof of the building and appear on The Old Farmer’s Almanac webcam; here’s me doing just that (the image is tilted because the camera got tilted by construction workers):

Me on the Roof of The Old Farmer's Almanac

I noticed this morning that the phone in the room is labelled “Crows Nst,” which takes its unofficial name into officialdom. I love this.

Phone at Yankee Publishing showing Crow's Nest on the phone identifying its location.

18 Oct 22:28

Apple Announces October 30th Event

by John Voorhees

As first reported by Neil Cybart, Apple has announced a media event for October 30, 2018 at 10:00 am Eastern. The event will be held at the Brooklyn Academy of Music, Howard Gilman Opera House.

To announce the event, Apple sent invitations to members of the press with a wide variety of artistic renderings of the Apple logo. The designs are also used on Apple's event website. Here's a sampling collected by Sebastiaan de With on Twitter:

Based on rumors that have circulated for months, Apple is expected to introduce new iPad Pros with edge-to-edge screens, Face ID, and external display support. There has also been speculation about a redesigned Apple Pencil that will pair with the new iPads using proximity-sensing technology and a redesigned magnetic connector on the back of the iPad. It’s possible Apple might use the event to introduce the long-expected AirPower charging mat and new Mac hardware too.

Support MacStories Directly

Club MacStories offers exclusive access to extra MacStories content, delivered every week; it's also a way to support us directly.

Club MacStories will help you discover the best apps for your devices and get the most out of your iPhone, iPad, and Mac. Plus, it's made in Italy.

Join Now
18 Oct 22:28

Tweetbot 5 for iOS Brings a Redesign, Dedicated GIPHY Support, and a New Dark Mode

by John Voorhees

Tweetbot 5 for iOS is out with a new look that more closely resembles the latest Mac version, which was redesigned in May. Tapbots has also added a handful of additional features, some of which mirror additions to the Mac version and others of which are unique to iOS.

Before you even launch Tweetbot, you’ll notice that the icon has been changed to reflect the style of the Mac app. After years of variations on the old icon, the new one, which I think looks angry, has taken some getting used to; but whether you like the new design or not, I like having the same icon on both platforms.

A comparison of Tweetbot 4's timeline and profile views to the new ones in Tweetbot 5.

The iconography throughout Tweetbot 5 uses a thicker stroke like the Mac app, which gives the icons more weight and a bolder look. When you open a tweet’s detail view, Tweetbot now indicates whether the person whose tweet you're viewing follows you, though it doesn’t do the same for other tweets if the one you opened is part of a thread.

Profiles have been redesigned too. Avatars are bigger and to the left of your bio, profile backgrounds are now clearly visible, the text is left-justified, and follower and following statistics are more prominent. The changes take up more vertical space but are easier to read and look nicer than in the past. Lastly in the area of design, there’s a new dark mode throughout the app that looks much better on OLED iPhones than Tweetbot 4’s dark mode.

Tweetbot 5's tweet detail view (right) shows off the new dark mode, bolder icons, and auto-play GIFs.

In addition to the redesign, Tapbots has added a few new features to Tweetbot 5 too. Like the Mac app, Tweetbot on iOS can now autoplay GIFs and video. The feature is turned on by default but can be turned off in settings.

GIFs are easier to add to tweets too. There’s a GIF button in the compose view that opens a GIPHY search field when tapped. Descriptions can be added to images from the action sheet that opens when you tap on an image in the tweet you’re composing, which is an excellent accessibility addition for VoiceOver users. Tweetbot has added haptic feedback throughout the app too.

Finally, there is a new tip jar feature in Tweetbot’s settings that lets users make a $0.99, $2.99, or $4.99 tip to support the development of the app, which is something that Twitterrific started doing a couple of years ago.

From a feature standpoint, Tweetbot 5 is a modest update, but one that I’ve enjoyed using during the beta. The dark mode looks much better on OLED phones, and the design changes give the app a fresh new feel. I wasn’t sure I’d like the addition of auto-play video, but I’ve been pleasantly surprised to find I do, though it’s easy enough to turn off if you don’t. The haptic feedback is another good addition that provides a subtle tactile response to actions taken in the app.

Tweetbot 5 is a free update for existing users and is available on the App Store.

18 Oct 22:28

Slanted Slates — 2018 Vancouver Civic Election

by Ken Ohrn

Had enough of earnest plodding efforts to sort the electoral gold from the heaps of ballot dross?

Is it still not coming together for you?  And tick-tock; the voting deadline is getting closer.  What’s a person to do?

How about this collection, by Christopher Porter, of slates for every intention? All you do is tap into your definitive inner vision for the future of the city — and then pick a prefab matching slate.

Couldn’t be simpler.

My fave is “No City for White Men”, reflecting the remarkable opportunity to elect a pretty credible council (including mayor) completely from the other 75+ percent of our citizenry.

18 Oct 22:28

LineageOS on a Samsung Galaxy S9

by Martin

Over the years, I’ve used LineageOS and its predecessor CyanogenMod on my personal devices, such as the Samsung Galaxy S4 and, for the past few years, the Samsung Galaxy S5. But my S5 is aging, so I was looking for a replacement. There is a LineageOS port for the Galaxy S6, but it seems quite experimental: I kept having power-drain issues because some system task would start to run wild after a few days. There’s also a version for the Galaxy S7, which is already somewhat hard to find on the market, but a version for the S8 is missing. So I was all the more surprised to see that there is a version for the relatively recent Galaxy S9.

This might have something to do with the fact that the Galaxy S9 is one of the first devices compliant with Google’s Project Treble, which makes a clearer separation between Android and the drivers required for the individual hardware. Sounds good to me! I would have preferred to run LineageOS on a somewhat less expensive device, such as one from the Galaxy J line, but there are no LineageOS builds for those at the time of writing.

Let’s Install

Installing LineageOS on the S9 was actually quite straightforward. The first thing to do is to unlock the bootloader, which is done in the ‘developer’ menu; that menu is made visible by tapping the “build number” field in the system menu several times. In the ‘developer’ menu, activate ‘OEM unlock’. I didn’t do this at first, so flashing the recovery image (see below) failed. There are rumors that the bootloader can only be unlocked after a week of using the phone, but this was certainly not the case for my device.

Next, I downloaded the TWRP recovery image that the LineageOS page for the S9 points to, the LineageOS image itself, and the root-access APK zip file. I didn’t download a GApps image, as I had noticed in the past that none of the apps I use requires the Google APIs; plain Android is enough. Very liberating!

The First Reboot Fails

After installing the TWRP recovery image, I booted into recovery and installed the LineageOS binary and the root APK zip file from an SD card. Easy, or so I thought. At the end of the flash process, a few error messages indicated that something was wrong with the data partition. Not much to be done, I thought, and rebooted. The LineageOS logo came up, but the boot process ended with a failure message about the data partition not being accessible. OK, so back to the recovery image for some tweaking. After a while I figured out that the problem was that the data partition was encrypted, and that formatting it, rather than merely deleting it, was the solution. This option is a bit hidden in the TWRP installation menu, so it wasn’t the obvious first choice. After re-installing with the formatting option activated, the device booted like it should.

New Safeguards Installing APKs

The first thing I did once the system was running was to check how to install the F-Droid store on Android 8. Rumor had it that Google had once again changed the way software can be installed from outside the Play Store, to make it safer for the ordinary user. On LineageOS, the additional safeguard was to move the F-Droid APK file from the download folder into the documents folder with the file explorer app and then execute it from there.

Almost Working Now

After installing a number of applications, I remembered reading that some people had issues with the camera when installing LineageOS over an older original Samsung software version. And indeed I had the same problem: the camera would not work. Fortunately, I could get the Samsung software that people in the forum reported is required to get the camera working, so I flashed the S9 back to its original software. Once that was done, I tested the camera, which worked with the original Samsung software, and then flashed back to LineageOS once more. That cost a bit of time but was well worth it, because the camera now works.

Improvements over the S5

And that was pretty much it as far as the installation process is concerned. I’ve been running LineageOS on the S9 for a few days now and haven’t seen any major hiccups so far. Compared to the S5, the camera is much better, and the much faster processor makes heavy apps like Firefox and OpenStreetMap for Android (OsmAnd) run much more smoothly. Also, I am really happy to have the clock and status indicators back on the display when the device is locked. The last time I had this feature was on my Nokia N8 many years ago, and I could never understand why such a useful feature was not copied by other companies. One thing not immediately visible is the vast number of LTE frequency bands supported by the S9. While the S5 mainly supported European and Asian bands, the S9 comes with a much broader range of supported bands, which is especially important if you happen to be in North America every now and then. The S5 does work there as well, but some important bands were missing, so I would sometimes have no reception.

And one more thing that has much improved is the wideband AMR (WB-AMR) sound quality when making phone calls. It’s like night and day compared to the S5, which could also do WB-AMR. The speaker in the device has improved as well, so making phone calls in hands-free mode or listening to music while the phone is on the desk is much better than with the S5. The only thing that really bugs me about the S9, apart from the high price, is that the battery can’t be replaced. I would sacrifice quite a bit for a replaceable battery, but I don’t see a good alternative that works with LineageOS at this point in time.

So let’s see how the S9 fares over time. Watch out for a follow-up post!

18 Oct 22:27

Encrypted SNI Comes to Firefox Nightly

by Eric Rescorla

TL;DR: Firefox Nightly now supports encrypting the TLS Server Name Indication (SNI) extension, which helps prevent attackers on your network from learning your browsing history. You can enable encrypted SNI today and it will automatically work with any site that supports it. Currently, that means any site hosted by Cloudflare, but we’re hoping other providers will add ESNI support soon.

Concealing Your Browsing History

Although an increasing fraction of Web traffic is encrypted with HTTPS, that encryption isn’t enough to prevent network attackers from learning which sites you are going to. It’s true that HTTPS conceals the exact page you’re going to, but there are a number of ways in which the site’s identity leaks. This can itself be sensitive information: do you want the person at the coffee shop next to you to know which sites you’re visiting?

There are four main ways in which browsing history information leaks to the network: the TLS certificate message, DNS name resolution, the IP address of the server, and the TLS Server Name Indication (SNI) extension. Fortunately, we’ve made good progress shutting down the first two of these: the new TLS 1.3 standard encrypts the server certificate by default, and over the past several months we’ve been exploring the use of DNS over HTTPS to protect DNS traffic. This is looking good, and we are hoping to roll it out to all Firefox users over the coming months. The IP address remains a problem, but in many cases multiple sites share the same IP address, so that leaves SNI.

Why do we need SNI anyway and why didn’t this get fixed before?

Ironically, the reason you need an SNI field is because multiple servers share the same IP address. When you connect to the server, it needs to give you the right certificate to prove that you’re connecting to a legitimate server and not an attacker. However, if there is more than one server on the same IP address, then which certificate should it choose? The SNI field tells the server which host name you are trying to connect to, allowing it to choose the right certificate. In other words, SNI helps make large-scale TLS hosting work.
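The privacy problem is easy to see at the byte level. The sketch below (illustrative only, not Mozilla's code) hand-assembles a server_name extension as laid out in RFC 6066 and then reads the hostname straight back out of the raw bytes. Since the ClientHello is sent before any encryption keys exist, this is all a passive network observer needs to do:

```python
import struct

def build_sni_extension(hostname: str) -> bytes:
    """Encode a TLS server_name extension (RFC 6066) -- carried in
    cleartext in the ClientHello, before any keys are negotiated."""
    name = hostname.encode("ascii")
    # name_type 0 = host_name, then 2-byte name length, then the name
    entry = b"\x00" + struct.pack("!H", len(name)) + name
    server_name_list = struct.pack("!H", len(entry)) + entry
    # extension_type 0 = server_name, then 2-byte extension length
    return struct.pack("!HH", 0, len(server_name_list)) + server_name_list

def sniff_hostname(ext: bytes) -> str:
    """What a passive on-path observer can do: read the name right out."""
    (name_len,) = struct.unpack("!H", ext[7:9])
    return ext[9:9 + name_len].decode("ascii")

ext = build_sni_extension("example.org")
print(sniff_hostname(ext))  # -> example.org, no decryption required
```

Encrypting this one field is exactly what ESNI adds; everything else about the handshake stays the same.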

We’ve known that SNI was a privacy problem from the beginning of TLS 1.3. The basic idea is easy: encrypt the SNI field (hence “encrypted SNI” or ESNI). Unfortunately every design we tried had drawbacks. The technical details are kind of complicated, but the basic story isn’t: every design we had for ESNI involved some sort of performance tradeoff and so it looked like only sites which were “sensitive” (i.e., you might want to conceal you went there) would be willing to enable ESNI. As you can imagine, that defeats the point, because if only sensitive sites use ESNI, then just using ESNI is itself a signal that your traffic demands a closer look. So, despite a lot of enthusiasm, we eventually decided to publish TLS 1.3 without ESNI.

However, at the beginning of this year, we realized that there was actually a pretty good 80-20 solution: big Content Distribution Networks (CDNs) host a lot of sites all on the same machines. If they’re willing to convert all their customers to ESNI at once, then suddenly ESNI no longer reveals  a useful signal because the attacker can see what CDN you are going to anyway. This realization broke things open and enabled a design for how to make ESNI work in TLS 1.3 (see Alessandro Ghedini’s writeup of the technical details.) Of course, this only works if you can mass-configure all the sites on a given set of servers, but that’s a pretty common configuration.

How do I get it?

This is brand-new technology and Firefox is the first browser to get it. At the moment we’re not ready to turn it on for all Firefox users. However, Nightly users can try out this privacy-enhancing feature now by performing the following steps: first, make sure you have DNS over HTTPS enabled; once you’ve done that, set the ESNI preference in about:config to “true”. This should automatically enable ESNI for any site that supports it. Right now, that’s just Cloudflare, which has enabled ESNI for all its customers, but we’re hoping that other providers will follow. You can use Cloudflare’s ESNI checker page to verify for yourself that it’s working.
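Concretely, the two steps above correspond to a pair of about:config preferences. The names in this sketch are assumptions based on the Nightly builds current at the time and may change as the feature evolves, so treat it as illustrative rather than authoritative:

```js
// prefs.js sketch (pref names are assumptions, not verified):
user_pref("network.trr.mode", 2);                  // enable DNS over HTTPS
user_pref("network.security.esni.enabled", true);  // enable encrypted SNI
```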

What’s Next?

During the development of TLS 1.3 we found a number of problems where network devices (typically firewalls and the like) would break when you tried to use TLS 1.3. We’ve been pretty careful about the design, but it’s possible that we’ll see similar problems with ESNI. In order to test this, we’ll be running a set of experiments over the next few months and measuring for breakage. We’d also love to hear from you: if you enable ESNI and it works or causes any problems, please let us know.

The post Encrypted SNI Comes to Firefox Nightly appeared first on Mozilla Security Blog.

18 Oct 22:27

iPhone XR Hands-On Videos Offer Best Look Yet at Apple’s Latest Flagship

by Ryan Christoffel

Today a variety of YouTube videos have been published featuring hands-on looks at the iPhone XR, which becomes available for pre-order tomorrow and ships Friday, October 26th. We've embedded several of the best videos below.

One common message across multiple videos is that the iPhone XR doesn't feel at all like a budget phone. Despite the comparisons, this isn't the iPhone 5C all over again; instead, the iPhone XR feels very much like a premium device, just at a much lower cost than the iPhone XS and XS Max.

Sara Dietschy's video includes a variety of important camera details. She mentions how Portrait mode on the XR only works on human subjects due to the way the single-lens camera operates, and interestingly she also says that the two Stage Light effects in Portrait mode aren't available with the rear-facing camera. It's not all bad news though: one camera advantage of the XR over the XS is that you don't need to back up as far when taking Portrait photos since there's only one lens being used, and all depth recognition is being done via software rather than the dual-lens hardware.

A video by Tyler Stalman mentions a couple other strengths of the XR, including that its LCD display looks excellent, and that even though the device is technically heavier than the XS due to being larger, it actually feels lighter in the hand relative to what he expected. He also highlights a detail about the device that was shared in Apple's September keynote, but which has been commonly overlooked: the XR actually has the longest-lasting battery of all available iPhones, lasting 1.5 hours longer than last year's 8 Plus model.

Finally, a short video by SuperSaf TV provides the best look yet at the six color options for the XR. I especially appreciated getting a good view of the colors of each aluminum band that wraps the phone; I'm a big fan of the Coral option for the glass back, but that color's aluminum band is more orange than I'd prefer, which is good to know.

Support MacStories Directly

Club MacStories offers exclusive access to extra MacStories content, delivered every week; it's also a way to support us directly.

Club MacStories will help you discover the best apps for your devices and get the most out of your iPhone, iPad, and Mac. Plus, it's made in Italy.

Join Now
18 Oct 22:25

You can now order concert tickets with Eventbrite integration on YouTube

by Brad Bennett

Video streaming service YouTube is one of the most common places to watch musical content, and now the service is doubling down on selling real-world concert tickets via a partnership with Eventbrite.

Eventbrite is a U.S.-based event management company that also sells tickets to concerts and other events. Now that it has partnered with YouTube, users can buy concert tickets right from a video.

Videos on artists’ official channels will start to include a ‘Tickets’ button. The new feature is only available to U.S. residents, but YouTube’s blog post says it plans to expand to the rest of North America, then the rest of the world.

YouTube first launched ticket sales through a Ticketmaster partnership in September 2017, and that integration hasn’t reached Canada either. MobileSyrup has reached out to Google to see if these features are coming to Canada.

This is an interesting move for YouTube. Spotify has been selling tickets on its platform since 2017, so the Eventbrite partnership lets YouTube compete on that front — but if the feature stays U.S.-only, Spotify keeps a big advantage in Canada.

Source: YouTube

The post You can now order concert tickets with Eventbrite integration on YouTube appeared first on MobileSyrup.

18 Oct 22:25

New Android app publishing format will mean smaller app sizes, faster downloads

by Sameer Chhabra

Mountain View search giant Google has taken steps to make it easier for developers to pack more features into their apps while simultaneously reducing app file sizes.

According to an October 18th, 2018 blog post written by Google Play product manager Dom Elliott, Google’s new ‘Android App Bundle’ file format allows developers to package together multiple app versions for different devices and different languages into a single comprehensive file.

Once the app has been processed by Google Play, Google’s app manager splits the app bundle into individual APK files meant for “every possible device configuration and language that [developers] support.”

“When a user installs the app, Play delivers the base split APK (all the code that’s common for every device), the language split APKs (for the languages the user speaks), and the device configuration split APKs (for the device’s screen size and the CPU architecture),” wrote Elliott, in the same October 18th blog post.

“This means the device gets just what it needs without wasted space.”
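The delivery logic Elliott describes — a common base plus only the splits a given device actually needs — can be sketched roughly as follows. This is a hypothetical illustration in JavaScript: the split names and device fields are invented for the example and are not Google Play's actual internals.

```javascript
// Sketch of the split-APK selection Play performs for an App Bundle install:
// always deliver the base module, then only the language, screen-density, and
// CPU-architecture splits matching the device's configuration.
function selectSplits(availableSplits, device) {
  const chosen = ["base"]; // code common to every device always ships

  for (const split of availableSplits) {
    if (split.type === "language" && device.languages.includes(split.value)) {
      chosen.push(split.name); // resources for a language the user speaks
    } else if (split.type === "density" && split.value === device.screenDensity) {
      chosen.push(split.name); // drawables matching the screen size/density
    } else if (split.type === "abi" && split.value === device.cpuAbi) {
      chosen.push(split.name); // native libraries for the CPU architecture
    }
  }
  return chosen;
}
```

For a hypothetical French-language, xxhdpi, arm64 phone, only the base module plus the `fr`, `xxhdpi`, and `arm64-v8a` splits would be delivered; files for every other configuration stay on Play's servers.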

Once apps have been installed on user devices, Google Play will deliver additional files depending on necessity — for example, when a user changes their device language.

Since the Android App Bundle format is modular, developers can also add more features to their apps without needing to worry about larger app sizes.

“Any app functionality can be contained in a dynamic feature module and delivered on demand,” wrote Elliott.

“This new model results in dramatically smaller apps that take less time to download and less space on a device.”

Additionally, the new App Bundle format should allow developers to push out updates at a faster pace thanks to the new ‘In-app Updates API.’

“When an update is detected, you can either notify the user with a prompt asking them to update immediately or show a prompt for a flexible update at a time and in a way of your choosing,” wrote Elliott.

According to research gathered by Google Play, larger app sizes often lead to fewer app installations.

“Developers who are using the Android App Bundle have APK sizes that are on average 35% smaller than releasing a ‘universal APK’ (an APK packed with everything needed to support all device configurations and languages that the Android App Bundle supports),” wrote Elliott.

The Android App Bundle should let developers spend more time building their apps and less time combating Android’s device fragmentation.

The Android App Bundle format is both open-source and backwards compatible.

Source: Google

The post New Android app publishing format will mean smaller app sizes, faster downloads appeared first on MobileSyrup.

18 Oct 20:34

Consent management: can it even work?

Read the whole thing: Why Data Privacy Based on Consent Is Impossible, an interview with Helen Nissenbaum.

The farce of consent as currently deployed is probably doing more harm as it gives the misimpression of meaningful control that we are guiltily ceding because we are too ignorant to do otherwise and are impatient for, or need, the proffered service. There is a strong sense that consent is still fundamental to respecting people’s privacy. In some cases, yes, consent is essential. But what we have today is not really consent.

And, in Big Data's End Run Around Anonymity and Consent (PDF):

So long as a data collector can overcome sampling bias with a relatively small proportion of the consenting population, this minority will determine the range of what can be inferred for the majority and it will discourage firms from investing their resources in procedures that help garner the willing consent of more than the bare minimum number of people. In other words, once a critical threshold has been reached, data collectors can rely on more easily observable information to situate all individuals according to these patterns, rendering irrelevant whether or not those individuals have consented to allowing access to the critical information in question. Withholding consent will make no difference to how they are treated!

Is consent management even possible? Is a large company that seeks consent from an individual similar to a Freedom Monster?

What would happen if consent had to be informed?

And what's going on with Judge Judy and skin care products? There are thousands of skin care scams on Facebook and other places on the internet that falsely state that their product is endorsed by celebrities. These scams all advertise a free sample of their product if you pay $4.95 for the shipping. Along the way, you have to agree to the terms and conditions.... The terms and conditions are only viewable through a link you have to click, which most of these people never do.

Or Martin Lewis and fake bitcoin ads? He launched a lawsuit in April 2018, claiming scammers are using his trusted reputation to ensnare people into bitcoin and Cloud Trader "get-rich-quick schemes" on Facebook.

The problem is that ad media that have more data, and are better at facilitating targeting, are also better for deceptive advertisers. Somehow an ad-supported medium needs consent for just enough data to make the ads saleable, no more. As soon as excess consent enters the system, the incentive to produce ad-supported news and cultural works goes down, and the returns to scamming go up.

See you at Mozfest? Related sessions: Consent management at Mozfest 2018

bonus links

FBI Brings Gun to Advertising Knife Fight

John Hegarty: Globalisation has hurt the marketing industry

What's There To Laugh About?

Advertising only ever works by consent

Mainstream Advertising is Still Showing Up on Conspiracy and Extremist Websites

Some dark thoughts on content marketing.

18 Oct 20:33

Google Maps now lets users share their trip progress on Android, iOS

by Sameer Chhabra

Mountain View search giant Google has taken steps to let users share their Google Maps journey information.

According to an October 18th, 2018 media release, Google Maps users on both Android and iOS are now able to share their live location, route and estimated time of arrival with anyone in their contacts list.

Once users have started navigating to a destination — whether they’re driving, walking or cycling — Google Maps will present an option to ‘Share trip progress.’

Users will also be able to share their location through third-party apps like Facebook Messenger, WhatsApp and Line.

Once a user’s journey has ended, Google Maps will automatically stop sharing their location.

“Getting where you need to go is important, but making it to your destination safe and sound is the most important thing of all,” said Samuel Mclean, product manager for Google Maps, in the same October 18th media release.

U.S.-based ride-sharing platform Uber also allows users to share their location with individuals in their contacts lists thanks to the app’s ‘Share My Trip’ feature.

Google Maps is free-to-download on Android and iOS.

Source: Google

The post Google Maps now lets users share their trip progress on Android, iOS appeared first on MobileSyrup.

18 Oct 20:33

Chrome 70 update features fingerprint authentication for websites and rounded search bar widget

by Dean Daley

Google is rolling out Chrome 70 for Android on the Play Store over the next few weeks. While Google’s changelog only mentions stability and performance improvements, 9to5Google’s teardown reveals much more.

For privacy reasons, Chrome 70 will not include “the Android and iOS build number in the user-agent identification string visible to websites,” according to 9to5Google. Reportedly, the change helps prevent users from being targeted or fingerprinted by malicious sites or actors.

Additionally, Chrome 70 lets websites use a handset’s fingerprint sensor for web authentication.
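Fingerprint sign-in on the web goes through the Web Authentication (WebAuthn) API. The sketch below shows the kind of credential-creation options a site might build before asking the browser for a platform authenticator; the relying-party name and user details are hypothetical placeholders, not from any real site.

```javascript
// Minimal sketch of WebAuthn credential-creation options that request the
// device's built-in (platform) authenticator — e.g. a fingerprint sensor.
function buildCreationOptions(challenge, userId) {
  return {
    publicKey: {
      challenge, // random bytes generated by the server
      rp: { name: "Example Site" }, // the relying party (the website)
      user: {
        id: userId,
        name: "user@example.com",
        displayName: "Example User"
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
      authenticatorSelection: {
        authenticatorAttachment: "platform", // built-in sensor, not a USB key
        userVerification: "required" // e.g. a fingerprint scan
      },
      timeout: 60000
    }
  };
}

// In a browser, a site would then call:
// navigator.credentials.create(buildCreationOptions(challenge, userId));
```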

Chrome 70 will also add a Material Design search widget with a pill-shaped bar, replacing the square search bar.

Further, there is a “downloads” menu so users can easily see where the phone stores downloaded files. Lastly, users will be able to turn on an option that says “Ask where to save files.”

Source: 9to5Google

The post Chrome 70 update features fingerprint authentication for websites and rounded search bar widget appeared first on MobileSyrup.

18 Oct 13:56

Pixelmator Pro Updated with Machine Learning Auto Enhancement, Light and Dark Modes, and Automator Actions

by John Voorhees

Pixelmator Pro for the Mac was updated to version 1.2 today with a handful of enhancements centered around macOS Mojave.

The update includes light and dark modes, which can be set in Preferences to follow the appearance picked in System Preferences or to use light or dark mode full-time. Dark mode closely resembles Pixelmator Pro’s existing UI, but the light mode is brand-new.

Pixelmator Pro's new light mode.

Pixelmator Pro 1.2 has also added a new auto-enhance feature for images that applies machine learning to automatically adjust white balance, exposure, hue and saturation, lightness, color balance, and selective color. Previously, auto-enhancement was available individually for some of the categories in Pixelmator’s Adjust Colors tab. The new ML Enhance feature, which the Pixelmator team says was trained on millions of professional photos, adjusts all of the categories listed above at once. If you don’t like the results, the adjustments can be turned off on a per-category basis or tweaked manually.

Automatic photo enhancement is a feature available in most photo editing apps. It’s not magic, but if it’s done well, auto-enhance is often enough for most photos or at least a good start from which you can refine the adjustments by hand.

ML Enhance can be turned on or off by category of adjustment and manually tweaked.

In my limited testing, Pixelmator Pro’s new ML Enhance does a solid job overall, but it did have trouble with some photos, especially with color adjustments. I applied ML Enhance to a variety of JPG images taken in 2011 with my Sony NEX-7 camera and more recently with my iPhone X and XS Max. Evaluating auto-adjusted images is inherently subjective, but to my eye, images that were originally a little washed out with muted colors seemed to benefit the most from ML Enhance, while brighter, more highly-saturated colors tended to be softened by ML Enhance or take on an overly cool or warm tone based on other parts of the photograph. As with other apps I’ve used, auto-enhance is worth a try when you want to adjust photos quickly, but it’s still worth keeping a critical eye on the results because they may not be to your liking, machine learning or not.

Pixelmator Pro 1.2 adds five Automator actions too:

  • Auto Enhance Images, which applies Pixelmator Pro’s new ML Enhance feature
  • Auto White Balance Images
  • Apply Color Adjustments to Images
  • Apply Effects to Images
  • Change Type of Images for converting Pixelmator-formatted images to a variety of other image file types

The new actions open up a wide variety of possible workflows. I especially like the Change Type of Images action because I often work on screenshots in Pixelmator, and instead of exporting each one individually as a JPG or PNG file, I can select them all and use a Quick Action I built to batch-convert them.

Continuity Camera is supported in Pixelmator Pro too. Just click the plus button in Pixelmator Pro’s toolbar, and you can take a photo or scan a document with an iOS device running iOS 12; it appears almost instantly in Pixelmator. The app has added support for SVG fonts too.

Pixelmator Pro has added some terrific features since it debuted late last year with regular updates that adopt the latest macOS technologies. Version 1.2 is no different and should make the process of editing large batches of photos faster with its new Automator actions and ML Enhance, both of which are worth checking out.

Pixelmator Pro 1.2 is a free update to existing users and is available to new users on the Mac App Store for $29.99.

18 Oct 13:46

Rogers announces plans to launch LTE-M network for IoT devices

by Sameer Chhabra
Rogers logo

Toronto-based national telecom service provider Rogers has announced plans to launch an LTE-M network to power internet of things (IoT) devices.

According to an October 18th, 2018 media release, Rogers will first roll out its LTE-M network in Ontario before the end of 2018.

Additional provinces will come online throughout 2019, with nationwide coverage expected by 2020.

“As leaders in IoT, we are committed to supporting our customers as they explore the capabilities and benefits available through Rogers rapidly growing IoT ecosystem,” said Dean Prevost, president of enterprise at Rogers Communications, in the same October 18th media release.

“With the launch of LTE-M, we are empowering the adoption of reliable, low cost, and secure IoT solutions that support a variety of use cases such as asset tracking, smart cities, utilities, transportation, and supply chain management.”

LTE-M, also known as LTE Cat M1, is designed to connect IoT devices to the internet.

More specifically, it’s designed to allow low-power devices, such as crop and harvest sensors, to operate without the constant need to recharge batteries.

Montreal-based national carrier — and Rogers competitor — Bell previously announced plans to launch its own LTE-M network in 2018.

According to Nigel Wallis, vice president of IoT and industry research at IDC Canada, approximately 81 percent of medium and large Canadian organizations use IoT devices today.

“The development of industry-specific IoT solutions addresses unique business needs, like smart utilities and smart asset tracking,” said Wallis, in the same media release.

“Low-power wide area networks (LPWAN) enable businesses to re-think traditional operations practices, and to innovate in ways they would not have attempted before.”

Rogers is currently accepting applications from IoT “solution providers” who are interested in working with the carrier.

Rogers is also accepting LTE-M field trial applications.

Source: Rogers

The post Rogers announces plans to launch LTE-M network for IoT devices appeared first on MobileSyrup.