Shared posts

16 Aug 23:05

Google Pixel 2 Passes Through FCC, Reveals Key Details

by Evan Selleck
The Google Pixel 2 and Pixel XL 2 are some of the most highly-anticipated devices of the year, and thanks to the FCC we may have at least a couple details confirmed ahead of the announcement. Continue reading →
16 Aug 23:05

Daily Scot – Poppin’ up in New West

by Scot Bathgate

While walking down Sixth Street in New Westminster I came across this temporary seating “Parklet” in former vehicle parking at the intersection with Belmont Street. These urban interventions are always more successful when tied to an adjacent cafe or takeout, in this case Tim Hortons. The before and after photos give you an idea of the powerful impact of enhancing the public realm with minimal effort and cost.


Google Street View images show the previous street condition.


16 Aug 23:05

The Best Earplugs for Sleeping

by Geoff Morrison
A closeup of an assortment of purple and yellow earplugs.

After spending 12 hours measuring the noise-reduction prowess of 25 different earplugs and having four panelists sleep with the top performers, we think Mack’s Slim Fit Soft Foam Earplugs are the best for most people. They fit a wide variety of ears and are notably effective in reducing overall noise, including snoring, traffic, and the like. But they won’t fit everyone—no earplug will.

16 Aug 23:05

BbWorld Report: Blackboard May Be Turning Around


Michael Feldstein, e-Literate, Aug 18, 2017


This long post makes the case that Blackboard may be turning around, but the more interesting reading is the analysis of the market, which sees virtually all new implementations in the Canada-USA higher education space being either Instructure's Canvas or Desire2Learn's Brightspace. Blackboard has bottomed out (and according to the authors the turnaround won't start for at least 12 months) and, interestingly, so has Moodle. A big part of this, I think, is that the market is saturated, which means you can't really depend on this data to make predictions. Blackboard and Moodle still have huge user bases. I'm reading two major things in this article supporting the case for Blackboard: a renewed interest in product development, and an increasing emphasis on openness and honesty.

16 Aug 23:05

Open Invitation to contribute toward the Ljubljana OER Action Plan


UNESCO, Aug 18, 2017


Cable Green writes by email: "UNESCO has released a draft 'OER Action Plan' and has asked for our comments and feedback. The draft OER Action Plan is available in English and French." The recommendations are broken down into five major categories: capacity-building and usage; language and cultural issues; access; changing business models; and policy. The report also "points to the urgency for new approaches, recalling that on current trends only 70% of children in low income countries will complete primary school by 2030; a goal that should have been achieved in 2015." There's a form for input, but it comes with the warning that "while all individual inputs are most welcome, we encourage inputs that are submitted collectively and/or endorsed by institutions." Also, input "may be positively evaluated based on such factors as the number of like-minded comments received, the source of contribution including governmental, IGOs, NGOs as well as institutions of teaching and learning, and a balance of geographical representation." That doesn't sound very open and inviting.

16 Aug 23:05

Freedom to Learn

Jackie Gerstein, User Generated Education, Aug 18, 2017


Nice discussion of Rogers' (1969) five defining elements of significant or experiential learning (quoted):

  1. It has a quality of personal involvement – Significant learning has a quality of personal involvement in which “the whole person in both his feeling and cognitive aspects [is] in the learning event” (p. 5).
  2. It is self-initiated – “Even when the impetus or stimulus comes from the outside, the sense of discovery, of reaching out, of grasping and comprehending, comes from within” (p. 5).
  3. It is pervasive – Significant learning “makes a difference in the behavior, the attitudes, perhaps even the personality of the learner” (p. 5).
  4. It is evaluated by the learner – The learner knows “whether it is meeting his need, whether it leads toward what he wants to know, whether it illuminates the dark area of ignorance he is experiencing” (p. 5).
  5. Its essence is meaning – “When such learning takes place, the element of meaning to the learner is built into the whole experience” (p. 5).

Jackie Gerstein comments, "So the push towards self-directed learning – helping learners develop skills for directing their own learning really isn’t new BUT the Internet, social media, and open-source content just make it easier for the educator to actually implement these practices especially when working with groups of students."

16 Aug 23:05

“We were all at fault, and we were all victims too.”


Metafilter, Aug 18, 2017


Discussion of the game No Man's Sky, which just came out with a new edition called 'Atlas Rises'. I've been trying it out. The same lovely universe is still there for the exploring, but more features and more gameplay have been added, including game-defined quests you can pursue. There's also some very limited interpersonal interaction with other players. What I really like about the game is that while all of these can be scripted in a closed-environment game (just as they can be scripted in a closed-environment course), the designers are taking on the much more difficult task of working in a generated environment that is, for all practical purposes, unlimited.

16 Aug 22:59

inReach in Nepal: Real Stories from the Earthquake

by garminblog
mkalus shared this story from garminblog's YouTube Videos.

From: garminblog
Duration: 03:35

On April 25, 2015 at 11:56 AM a 7.8 magnitude earthquake shook Nepal, claiming the lives of more than 8,000 people and permanently changing the lives of countless more. As the dust settled, the world mobilized to provide relief and help the people of Nepal recover from such a devastating event. There were over 50 inReach devices in Nepal when the earthquake struck, and in the days and weeks following they were used to help coordinate disaster assistance around the country. Climbers, guides, doctors and relief agencies depended on the inReach for communication and emergency response when it mattered most.

This video takes a look at the diversity of Nepal and its people, while recounting two amazing stories of how inReach helped facilitate the long path to recovery.

16 Aug 22:58

DOJ Demands Company Turn Over Info On 1.3 Million Visitors To Anti-Trump Website

by Kate Cox
mkalus shared this story from Consumerist.

Since most of us aren’t looking at websites via a Tor connection, we’re leaving digital footprints all over the place. The sites you visit may have a surprising amount of information on you, even if you’re not logged in, and even if you went to that site inadvertently. That’s why the Justice Department is trying to compel a web-hosting company to turn over everything it knows about anyone who ever clicked on a site that is critical of President Trump. It’s also why that company is fighting against this demand.

The company hosting the site, DreamHost, explained in a blog post what was going on.

It’s very common for tech companies, including hosting services, to get requests for data from law enforcement. DreamHost, like its peers, has a legal team in-house that receives search warrants and other requests and decides how to respond.

In the case of “vague or faulty orders,” DreamHost writes, the legal team “rejects or challenges” the requests. Including this one.

What do the feds want?

The Justice Department has requested that DreamHost provide a list of the IP address of every single visitor to the website.

That’s not just content creators or subscribers; that’s everyone who ever typed the address into their browser or clicked the link on Facebook. And it’s a list that’s 1.3 million visitors long.

Along with the IP addresses, DreamHost says, the warrant also asks for the dates and times of the visits, along with browser and operating system information.

In short, the feds are asking the site host to send them all the information they need to peg down exactly who visited the site, when, and from where.

There can be times when site visitor log information is absolutely vital to law enforcement activity. For example, if someone is accused of trying to plant a bomb, having proof that the suspect accessed a certain site about bomb-making, from their home, one week before planting the bomb, would be valuable information.

Related: Without internet privacy rules, how can I protect my data?

But that’s not what this request is. This warrant asks for the identifying information of every single person who visited a website during a period of time, full stop.

Why are they asking?

The site in question is disruptj20.org, created to help organize one of many protests set against the backdrop of President Trump’s Jan. 2017 inauguration.

Hundreds of protests were held nationwide during the inauguration weekend, Jan. 20-22, including the massive women’s marches, largely without incident.

But the DisruptJ20 protest, which occurred on Inauguration Day, resulted in more than 200 arrests. Many of those arrested have since been indicted on, and are now facing, felony rioting charges.

Many of the cases are still ongoing, and in various stages of investigation and prosecution. Although DreamHost did not have access to the specific reason why they were served this search warrant, it seems very likely to be related to those cases.

The answer: Nope!

DreamHost, the company writes, found the request to be “a strong example of investigatory overreach and a clear abuse of government authority,” and so they challenged the warrant.

This, the company says, is its standard procedure. But “instead of responding to our inquiries regarding the overbreadth of the warrant,” DreamHost writes, the Justice Department filed a motion asking for a court order to compel the records.

DreamHost is fighting back. The company filed arguments opposing the court order, and both the company and the DOJ will be arguing it out at a hearing on Friday, Aug. 18.

“Internet users have a reasonable expectation that they will not get swept up in criminal investigations simply by exercising their right to political speech against the government,” DreamHost concludes. “We intend to take whatever steps are necessary to support and shield these users from what is, in our view, a very unfocused search and an unlawful request for their personal information.”

16 Aug 22:58

Amazon’s New ‘Instant Pickup’ Service Should Just Be Called ‘Going To The Store’

by Mary Beth Quirk
mkalus shared this story from Consumerist.

Here’s the newest concept from Amazon: A place where customers can quickly pick up a snack or a roll of toilet paper without having to wait in long lines. It’s called “Instant Pickup,” but you might already know it by a more familiar name: “Going to the store.”

Instant Pickup allows Prime and Prime Student members to choose from a curated daily assortment of “essentials” — from the bag of chips and the cold drinks you need to survive a hangover to a new phone charger to replace the one you lost last night — that can then be picked up from self-service lockers.

To use the service, customers shopping on the Amazon app can tap the menu button at the top of the app, then look for Instant Pickup in “Programs and Features.”

Customers then select their items, place the order — or add last-minute items to an online order if they need to — and then pick them up from a designated pickup location. Orders will be ready in as little as five minutes, Amazon says.

There are only five locations offering the service thus far — Los Angeles, Atlanta, Berkeley, CA, Columbus, and College Park, MD — but more locations will be added in the coming months, the company notes.

It’s no secret that Amazon has its eye on the younger set: The company knows that if it wants to get newly minted adults hooked on its service for life, offering them everything they need, when they need it, could be a good way to hook ’em.

To that end, Prime Student only costs half of what the grownup version costs. Amazon currently offers 22 pickup lockers on college campuses across the country for members of the service.

16 Aug 22:58

Uber Settles Federal Allegations It Deceived Customers About Privacy & Data Security

by Ashlee Kieler
mkalus shared this story from Consumerist.

Uber has reached a deal with the Federal Trade Commission to settle the government’s investigation into the ride-hailing service’s allegedly questionable privacy practices.

As part of the settlement, Uber will implement a comprehensive privacy program and obtain regular, independent audits in order to settle charges that it deceived customers by failing to secure their private, personal information.

In a complaint [PDF] against the San Francisco-based company, the FTC claims Uber failed to closely monitor employee access to customer and driver data, or create reasonable measures to secure users’ personal information that was stored on a third-party server.

Employee Access

In late 2014, Uber landed in hot water after an executive discussed the idea of digging up dirt on journalists critical of the company.

At the time, Uber executive Emil Michael suggested Uber should spend millions of dollars to hire a team of opposition researchers to spread details of the personal life of Sarah Lacy of PandoDaily – a Silicon Valley site that has a rather contentious relationship with the ride service.

The executive’s comments to BuzzFeed came a month after the journalist wrote an article about her decision to delete Uber’s app after a promotion by the company in France offered to pair riders with “hot chicks.” The journalist encouraged others to ditch the app, too.

Related: “Security As An Afterthought:” 3 Frightening Privacy Claims From Former Uber Staffers

While the Uber executive issued an apology for his remarks, saying they were supposed to be off the record, the company was the subject of intense scrutiny from once-loyal users after additional reports suggested the company used an internal aerial tracking tool, dubbed “God View,” that allowed employees to easily track riders.

In an attempt to resolve these issues, Uber assured customers that it had a “strict policy prohibiting” employees from accessing rider or driver data, unless the company has a legitimate business purpose to do so. According to Uber, at the time, legitimate business purposes included:

• Supporting riders and drivers in order to solve problems brought to their attention by the Uber community.
• Facilitating payment transactions for drivers.
• Monitoring driver and rider accounts for fraudulent activity, including terminating fake accounts and following up on stolen credit card reports.
• Reviewing specific rider or driver accounts in order to troubleshoot bugs.

Uber promised customers that employee access to information would be closely monitored on an ongoing basis.

The company even developed an automated system to monitor employee access to consumer personal information. However, the FTC claims the company stopped using it less than a year after it was put in place.

According to the complaint, from approximately Aug. 2015 until May 2016, Uber did not follow up on automated alerts concerning the potential misuse of consumer personal information, and for approximately the first six months of this period, the company only monitored access to account information belonging to a set of internal high-profile users, such as Uber executives.

During this time, the complaint alleges that the company did not monitor internal access to personal data unless an employee specifically reported that a co-worker had engaged in inappropriate access.

Data Storage

Additionally, the FTC complaint alleges that Uber failed to deliver on claims that all customer information was “securely stored within our databases.”

In several instances, the FTC claims Uber’s customer service reps assured customers of the strength of the company’s security practices.

“Your information will be stored safely and used only for purposes you’ve authorized,” a rep said, according to the complaint. “We use the most up-to-date technology and services to ensure that none of these are compromised.”

“I understand that you do not feel comfortable sending your personal information via online,” another rep allegedly said. “However, we’re extra vigilant in protecting all private and personal information.”

Despite this, the FTC claims the company’s actual security practices failed to prevent unauthorized access to customers’ personal information.

Until Sept. 2014, Uber failed to implement reasonable access controls to safeguard data stored in third-party databases, such as requiring engineers and programmers to use distinct access keys to access personal information of drivers and customers, according to the complaint.

Instead, Uber allowed employees to use a single key that gave them full administrative access to all the data, and did not require multi-factor authentication for accessing the information.

In addition, Uber stored sensitive consumer information, including geolocation information, in plain readable text in database back-ups stored in the cloud, the complaint states.

As a result of these alleged failures, the FTC claims an intruder was able to access personal information about Uber drivers in May 2014, including more than 100,000 names and driver’s license numbers that were stored in a datastore operated by Amazon Web Services, according to the complaint.

Uber did not discover this intrusion until Sept. 2014, the complaint alleges, and only then did the company take steps to prevent further unauthorized access.

While Uber initially identified nearly 49,000 drivers affected by the breach, the company discovered in the summer of 2016 that an additional 60,000 drivers’ information was accessed.

The Settlement

Under today’s settlement, Uber is prohibited from misrepresenting how it monitors internal access to consumers’ personal information, and from misrepresenting how it protects and secures that data.

The company is required to implement a comprehensive privacy program that addresses privacy risks related to new and existing products and services and protects the privacy and confidentiality of personal information collected by the company.

Additionally, Uber is required to obtain within 180 days, and every two years after that for the next 20 years, independent, third-party audits certifying that it has a privacy program in place that meets or exceeds the requirements of the FTC order.

Acting FTC chairman Maureen Ohlhausen noted on a call today that while the settlement does not include a penalty, Uber could be fined if it is found to violate the agency’s order.

Consumerist has reached out to Uber for comment on the settlement. We’ll update this post if we hear back.

16 Aug 22:57

Facebook’s Hate Speech Policies Censor Marginalized Users

mkalus shared this story.

As queer artists and activists who have challenged Facebook’s “real names” policy for three years, we’re alarmed by a new trend: Many LGBTQ people’s posts have been blocked recently for using words like “dyke,” “fag,” or “tranny” to describe ourselves and our communities.


While these words are still too-often shouted as slurs, they’re also frequently “reclaimed” by queer and transgender people as a means of self-expression. However, Facebook’s algorithmic and human reviewers seem unable to accurately parse the context and intent of their usage.

Whether intentional or not, these moderation fails constitute a form of censorship. And just like Facebook’s dangerous and discriminatory real names policy, these examples demonstrate how the company’s own practices often amplify harassment and cause real harm to marginalized groups like LGBTQ people, communities of color, and domestic violence survivors—especially when used as a form of bullying to silence other users for their identities or political activities.

For example, we’ve received reports from several people whose posts about their LGBTQ activism were taken down. Ironically, one was attorney Brooke Oliver, who posted about a recent Supreme Court ruling related to her historic case that won Dykes on Bikes (a group of motorcycle-riding lesbians that traditionally leads gay pride parades) a trademark.

Two individuals wrote that they were reported for posting about the return of graphic novelist Alison Bechdel’s celebrated Dykes To Watch Out For comic strip. One happened to be Holly Hughes, who is no stranger to censorship: She’s a performance artist and member of the infamous NEA Four. A gay man posted that he was banned for seven days after sharing a vintage flyer for the 1970s lesbian magazine DYKE, which was recently featured in an exhibition at the Museum of the City of New York. A queer poet of color’s status update was removed for expressing excitement in finding poetry that featured the sex lives of “black and brown faggots.”

A young trans woman we heard from was banned for a day after referring to herself as a “tranny” alongside a selfie that proudly showed off her new hair style. After she regained access, she posted about the incident, only to be banned again for three more days. She also highlighted double-standards in reporting, noting that in her experience men often use the term to harass her, but are rarely held accountable. Many others also shared stories of reporting genuinely homophobic, transphobic, racist, and sexist content, only to be told it didn’t violate Facebook’s “Community Standards.”

Additionally, former RuPaul’s Drag Race contestant Honey Mahogany was unable to purchase an ad featuring the hashtag #blackqueermagic for an event that features a cast of African-American performers. It turns out that Facebook prohibits ads with “language that refers to a person’s age, gender, name, race, physical condition, or sexual orientation” (though it is easy enough to target users based on identity regardless). While such policies may rightfully prevent discrimination in legally protected areas like employment or housing, they cast too wide a net and ultimately discriminate against communities in cases like this.

And these stories are just the tip of the iceberg. Facebook, of course, has recently faced public controversy for (temporarily) removing content like Nick Ut’s famous photo of Kim Phúc fleeing a napalm attack and video of Philando Castile’s murder by police.

Interestingly, in a recent blog post on the difficulty of moderating hate speech, Facebook vice president Richard Allan offered “dyke” and “faggot” as challenging examples, noting that, “When someone uses an offensive term in a self-referential way, it can feel very different from when the same term is used to attack them.”

However, as with its real names policy, while Facebook’s intentions may be noble, its algorithms and human-review teams still make too many mistakes. The company is also increasingly under pressure from users, groups, and now governments to improve its procedures—Germany just passed legislation requiring social media companies to remove hate speech.

We’ve identified four interrelated problems.

First, Facebook’s leadership doesn’t seem to understand the nuances of diverse identities. As leaked documents recently published by ProPublica indicate, its policies aim to prevent harassment of users based on “protected categories” like race, gender, and sexual orientation; however, by making exceptions for subsets of protected groups, the company’s protocols paradoxically “protect white men from hate speech but not black children,” as ProPublica reported. Such a color-blind and non-intersectional approach fails to acknowledge the ways in which different groups are discriminated against differently. (It is also not too surprising that Facebook ultimately protects white men, given its employee demographics.)

Second, Facebook’s approach to most issues, including authentic names or hate speech, is to create one-size-fits-all policies that it claims will work for the majority of users. However, given that Facebook’s user base just topped 2 billion, even something that affects 1 percent of users still affects 20 million people. Moreover, it appears that Facebook’s own policies aren’t applied consistently. Sometimes Facebook formally implements exceptions to rules, and then the risk of reviewers’ own opinions or biases interfering is huge.

Third, Facebook does not share the details of its enforcement guidelines, release data on the prevalence of hate speech, or give users an opportunity to appeal decisions and receive individualized support. With this lack of transparency and accountability, the company plays judge, jury, and executioner with its patchwork of policies, leaving many users stuck in automated customer service loops. And in cases in which users are banned, they are unable to participate in what is arguably one of our most important public forums. Like the telephone, Facebook is essentially a utility: To be abruptly cut off from one’s content and communities, based on arbitrary policies, can be annoying if not outright dangerous—especially for queer and trans people who rely on these connections for support.

Fourth, Facebook appears unwilling to invest adequate resources to address these issues. While it recently pledged to increase the size of its review team, even employing 7,500 people to moderate hundreds of thousands of reported posts each week seems paltry. Further, many reviewers may not be adequately trained or given enough time to interpret context and detail, especially with regard to diverse communities. This is exacerbated by the fact that many content reviewers are outsourced internationally and don’t necessarily have cultural competence in the geographic areas they are reviewing.

Our experience with the #MyNameIs campaign—which scored an apology and several changes to the company’s “real names” policy, and continues to assist users in getting their accounts back—is that Facebook prioritizes and invests resources in rainbow-colored PR stunts and social experiments, but drags its feet when it comes to building tools that actually keep people safe.

We understand that it’s not easy to manage a platform that’s home to one-quarter of the world’s population, and that living in an increasingly digital culture comes with unanticipated challenges. And yet, if Facebook truly wants to “build community and bring the world closer together,” it has to do better. It must work with diverse communities to genuinely understand their needs, and implement policies and protocols that take into account their specificities—without doing harm, denying users’ identities, or preventing us from expressing ourselves.

When Facebook released its rainbow flag Pride reaction button earlier this summer, reviews were mixed. For us, a rainbow flag is a great symbol, but it’s not enough to wave—or click. If Facebook truly wants to be a safe place for queer, trans, and other diverse communities, it needs to fix its names and content moderation policies before LGBT users march on by to a better platform.

Dottie Lux (@redhotsburlyq) is an event producer and the creator of Red Hots Burlesque, a queer burlesque and cabaret; she is also a co-owner at San Francisco’s Legacy Business The Stud Bar. Lil Miss Hot Mess (@lilmisshotmess) is a PhD student in media studies at NYU by day and a drag queen by night. Both are organizers with the #MyNameIs campaign. WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Read more opinions here.

16 Aug 22:57

Oregon just passed the only bike-specific tax in the country

mkalus shared this story from Pacific Northwest News.

On Thursday, the Oregon Legislature approved an ambitious $5.3 billion transportation tax and fee package. House Bill 2017, which passed the Senate 22-7, includes a 4-cent gas tax hike, a $16 vehicle registration fee increase, 0.1 percent payroll tax and 0.5 percent tax on new car sales.

It also includes something that no other state in the country has: a tax on the sale of bikes.

That means, unless opponents challenge the bill at the ballot or in court, starting Jan. 1, 2018, new bicycles with a wheel diameter of 26 inches or more and a retail price of $200 or more will be taxed a flat rate of $15.

The tax will be collected by retailers at the time of sale. It should be noted that most states do tax the sale of bikes because they have sales tax and tax the sale of most goods, which Oregon does not.

The money raised by the bike tax will go directly to projects "that expand and improve commuter routes for nonmotorized vehicles and pedestrians."

While lawmakers consider the passage of the bill a big win, bicycle activists are less enthusiastic.

"Congrats to Oregon," wrote Angie Schmitt on Streets Blog USA, "on its preposterous bike tax that accomplishes no discernible transportation goal except dampening demand for new bikes."

"The only way to like this tax is to think 1) it will quell the anger from people who think, 'Those bicyclists don't pay their fair share!' (it won't)," wrote Jonathan Maus, editor of BikePortland.org, "or 2) you think the money it raises for infrastructure outweighs the potential disincentive to new bike buyers, the erosion of profits from bike retailers, and the absurdity of it on principle alone."

"Time will tell I suppose," he added.

Others are already coming up with ways to skirt the law. One suggestion? Buy bikes with smaller wheels.

-- Lizzy Acker

503-221-8052
lacker@oregonian.com, @lizzzyacker

16 Aug 22:55

Basic Bike Maintenance

by Thea Adler
In this video, we will go over a few basic bike care tips that you can put to use at home!
16 Aug 22:52

FCC filing confirms Pixel 2 will be HTC-made, feature ‘Active Edge’ pressure-sensitive frame

by Igor Bonifacic
Google Pixel

A new HTC listing filed with the U.S. Federal Communications Commission confirms that the Pixel 2 will ship with a pressure-sensitive side frame.

The filing, spotted by 9to5Google, includes screenshots of the upcoming phone’s settings menu. One of the menus includes a ‘Languages, inputs & gestures’ heading, under which it says ‘Active Edge on, squeeze for your Assistant.’

The same menu reveals the phone is running Android 8.0.1. This is significant insofar as the current version of Android O, DP4, which Google itself has said is the final beta build before the official launch, is still at 8.0.0, suggesting that current Pixel 2 devices are running near-final software.

Part of the Pixel 2's FCC filing

The filing also confirms HTC is manufacturing the Pixel 2. Previously it was believed, based on past rumours, that LG was producing both the Pixel 2 and Pixel 2 XL.

Source: FCC Via: 9to5Google


16 Aug 17:28

Effective collaboration: Tips for teaming with marketing coordinators

by Jason Lyman

Illustration for effective collaboration with marketing coordinators

Your team’s ready to kick off the next marketing campaign. You need an original idea, a great strategy, and most importantly, a marketing team that’s set up for success. Understanding the role everyone on the project plays can help your team function like a well-oiled machine. To better understand how teams collaborate, we talked with thousands of marketing and design professionals in the US. Today, we want to share what we learned. In this first post of our series on effective collaboration, we’ll begin looking at marketing personas to identify their pain points and find solutions to make working together easier for everyone on your team.

Recognize common roadblocks

Coordinators are at the center of every marketing project. They work across disciplines and departments to ensure a smooth campaign. By facilitating communication between colleagues, agencies, and freelancers, they guide a campaign’s major assets from beginning to end. The first step to improving collaboration with the coordinator is to identify common roadblocks:

  • Siloed teams and dispersed parties
  • Inefficient feedback and approval cycles
  • Fragmented information across email threads and other channels

Create a central hub

Next, agree on a single source of truth. Having one location for storing, sharing, and accessing project files allows the Coordinator to bridge the gap between internal and external members on the team.

Organize a feedback system

An efficient method for trafficking feedback rounds from the creative team to the client is the key to a Coordinator’s heart. Dropbox features, like commenting and annotations, help keep feedback exactly where it should be.

Two tips for Coordinators

  1. Know when to filter feedback, translate requests, and loop others into conversations.
  2. Set up the rest of the team for success by communicating clear expectations and deadlines, so that teammates are held accountable for their work.

Two tips for teammates

  1. Avoid inundating your Coordinator with questions and requests. Taking time to organize and consolidate your thoughts makes it easier for them to get you the answers you need.
  2. Notify them early about red flags or uncertainties to avoid mishaps down the line.

Want to learn more solutions for facilitating better teamwork and brilliant campaigns? Download our free eBook, Team up for marketing success.

15 Aug 18:27

No Place for Office Space in Downtown Vancouver

by Sandy James Planner


Canadian Business Image

With some of the recent events and policies south of the Canadian border it’s no surprise that there is a squeeze on office space, as reported in the Province by Sam Cooper. While vacancy rates have dropped from 8.3 per cent to 6.8 per cent and sound healthy compared to the housing market, they are not.

The report from professional services firm JLL says a tight Vancouver commercial real estate market will be driven by new demand from technology companies. Vacancy rates could dive from about seven per cent currently to three per cent in 2019, the JLL report says, which would be “the lowest vacancy rate on record.”

How low is a low office vacancy rate? Cushman and Wakefield estimated that by 2019 “Vancouver is predicted to have the second-lowest office-vacancy rate in the Western hemisphere.” The vice-president of the services firm JLL noted that he had never seen such great demand from companies for Vancouver office space in 25 years of work. “A lot of the companies are from the U.S. The low Canadian dollar is attractive, and also we are a market where it is easier to bring in (high-technology) workers from overseas.”

To put that in better perspective, there was 2.3 million square feet of new office space built in the “downtown market” in the last two years. With the swift uptake of office space, it is expected that suburban Metro Vancouver communities will reap business relocations, with higher vacancy rates and lower rents, not to mention the fact that employees would have access to more affordable and varied forms of housing.

The City of Vancouver observes that there are new rezonings in Railtown, the False Creek Flats and in Mount Pleasant for new office space. The challenge is going to be finding the large floor plates and area amenities necessary to accommodate hundreds of new employees working in one office location. Will this be a driver for further office development in other parts of Metro Vancouver?

Daily Hive Image


15 Aug 18:27

laughingsquid: A History of Space Travel, An Art Print Mapping...

15 Aug 18:26

Evolution of the Heroku CLI: 2008-2017

by Jeff Dickey

Over the past decade, millions of developers have interacted with the Heroku CLI. In those 10 years, the CLI has gone through many changes. We've changed languages several times; redesigned the plugin architecture; and improved test coverage and the test framework. What follows is the story of our team's journey to build and maintain the Heroku CLI from the early days of Heroku to today.

  1. Ruby (CLI v1-v3)
  2. Go/Node (CLI v4)
  3. Go/Node (CLI v5)
  4. Pure Node (CLI v6)
  5. What's Next?

Ruby (CLI v1-v3)

Our original CLI (v1-v3) was written in Ruby and served us well for many years. Ruby is a great, expressive language for building CLIs; however, we started experiencing enough problems that we knew it was time to start thinking about some major changes for the next version.

For example, the v3 CLI performed at about half the speed on Windows as it did on Unix. It was also difficult to keep the Ruby environment for a user's application separate from the one used by the CLI. A user may be working on a legacy Ruby 1.8.7 application with gems specific to Ruby 1.8.7. These must not conflict with the Ruby version and gem versions the CLI uses. For this reason, commands like heroku local (which came later) would have been hard to implement.

However, we liked the plugin framework of the v3 CLI. Plugins provide a way for us to nurse new features, test them first internally and then later in private and public beta. Not only does this allow us to write experimental code that we don't have to ship to all users, but also, since the CLI is an open-source project, we sometimes don't want to expose products we're just getting started on (or that are experimental) in a public repository. A new CLI not only needed to provide a plugin framework like v3, but also it was something we wanted to expand on as well.

Another reason we needed to rewrite the CLI was to move to Heroku's API v3. At the start of this project, we knew that the old API would be deprecated within a few years, so we wanted to kill two birds with one stone by moving to the new API as we rewrote the CLI.

Go/Node (CLI v4)

When we started planning for v4, we originally wanted the entire CLI to be written in Go. Before I started at the company, there was even an experimental project, called hk, to rebuild the CLI in Go. hk was a major departure from the existing CLI that not only changed all the internals, but changed all the commands and IO as well.

Parity with CLI v3

We couldn't realistically see a major switch to a new CLI that didn't keep at least a very similar command syntax. CLIs are not like web interfaces, and we learned this the hard way. On the web you can move a button around, and users won't have much trouble seeing where it went. Renaming a CLI command is a different matter. This was incredibly disruptive to users. We never want users to go through frustration like that again. Continuing to use existing syntax and output was a major goal of this project and all future changes to the CLI.

While we were changing things, we identified some commands that we felt needed work with their input or output. For example, the output of heroku addons changed significantly using a new table output. We were careful to display deprecation warnings on significant changes, though. This is when we first started using color heavily in the CLI. We disable color when the output is not a tty to avoid any issues with parsing the CLI output. We also added a --json option to many commands to make it easier to script the CLI with jq.
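
A minimal sketch of the tty check behind this behavior, using only Node core (this is illustrative, not the CLI's actual code):

// Illustrative sketch, not the CLI's actual implementation: emit ANSI
// color codes only when stdout is an interactive terminal, so piped
// output (for example into jq or grep) stays clean and parseable.
const useColor = Boolean(process.stdout.isTTY)

function log (message) {
  // 36 = cyan, 0 = reset
  console.log(useColor ? `\x1b[36m${message}\x1b[0m` : message)
}

log('hello')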

No Runtime Dependency

In v3, ensuring that we had a Ruby binary that didn't conflict with anything on the user's machine on all platforms was a big headache. The way it was done before also did not allow us to update Ruby without installing a new CLI (so we would've been stuck with Ruby 1.9 forever). We wanted to ensure that the new CLI didn't have a runtime dependency so that we could write code in whatever version of Ruby we wanted to without worrying about compatibility.

So Why Not All Go?

You might still be wondering why we didn’t reimplement both the plugins and core in Go (but maintain the same command syntax) to obviate our runtime dependency concerns. As I mentioned, originally we did want to write the CLI in Go as it provided extremely fast single-file binaries with no runtime dependency. However, we had trouble reconciling this with the goal of the plugin interface. At the time, Go provided no support for dynamic libraries and even today this capability is extremely limited. We considered an approach where plugins would be a set of compiled binaries that could be written in any language, but this didn't provide a strong interface to the CLI. It also raised the question of where they would get compiled for all the architectures.

Node.js for Plugins and Improved Plugin Architecture

This was when we started to think about Node as the implementation language for plugins. The goal was for the core CLI (written in Go) to download Node just to run plugins and to keep this Node separate from any Node binary on the machine. This kept the runtime dependency to a minimum.

Additionally, we wanted plugins to be able to have their own dependencies (library, not runtime). Ruby made this hard, as it's very difficult to have multiple versions of the same gem installed. If we ever wanted to update a gem in v3, we had to go out of our way to fix every plugin in the ecosystem to work with the new version. This made updating any dependencies difficult, and it didn't allow plugins to specify their own dependencies. For example, the heroku-redis plugin needs a redis dependency that the rest of the CLI doesn't need.
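
In the Node ecosystem, each plugin carries its own npm manifest, so a dependency like that stays local to the plugin. The manifest below is a hypothetical illustration (names and versions assumed, not the real heroku-redis manifest):

// package.json (hypothetical example, not the actual heroku-redis manifest)
{
  "name": "heroku-redis",
  "version": "1.0.0",
  "dependencies": {
    "redis": "^2.8.0"
  }
}

Because npm can nest dependencies per package, two plugins can rely on different versions of the same library without conflicting, something Ruby's flat gem environment made difficult.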

We also wanted to improve the plugin integration process. In v3, when we wanted the functionality from a plugin to go into the core of the CLI, it was a manual step that involved moving the commands and code into the core of the CLI and then deprecating the old plugin. It was fraught with errors, and we often had issues come up attempting this. Issues were compounded because it usually wasn't done by a CLI engineer; it was done by a member of another team who was usually moving a plugin for the first time.

Ultimately we decided to flip this approach on its head. Rather than figure out an easy way to migrate plugin commands into the core, we made the CLI a collection of core plugins. In other words, a plugin could be developed on its own and installed as a “user plugin”, then when we wanted to deliver it to all users and have it come preinstalled, we simply declared it as a “core plugin”. No modifications to the plugin itself would be required.

This model provided another benefit. The CLI is now a modular set of plugins where each plugin could potentially be maintained by a separate team. The CLI provides an interface that plugins must meet, but outside of that, individual teams can build their own plugins the way they want without impacting the rest of the codebase.

Allowing these kinds of differences in plugins is actually really powerful. It has allowed developers on other teams and other companies to provide us with clever ideas about how to build plugins. We've continually been able to make improvements to the plugin syntax and conventions by allowing other developers the ability to write things differently as long as they implemented the interface.

Slow Migration

One thing I've learned from doing similar migrations on back-end web services is that it's always easier to migrate something bit-by-bit rather than doing a full-scale replacement. The CLI is a huge project with lots of moving parts. Doing a full-scale replacement would have been a 1+ year project and would have involved a painful QA process while we validated the new CLI.

Instead, we decided to migrate each command individually. We started out by writing a small core CLI with just a few lesser-used commands and migrating them from the v3 CLI to the v4 CLI one at a time. Moving slowly allowed us to identify issues with specific commands (whether it was an issue with the core of the CLI, the command itself, or using the new API). This minimized effort on our part and user impact by allowing us to quickly jump on issues related to command conversion.

We knew this project would likely take 2 years or longer when we started. This wasn't our only task during this time though, so it enabled us to make continual progress while also working on other things. Over the course of the project, we sometimes spent more time on command conversion, sometimes less. Whatever made sense for us at the time.

The only real drawback with this approach was user confusion. Seeing two versions of the CLI listed when running heroku version was odd and it also wasn't clear where the code lived for the CLI.

We enabled the gradual migration from v3 to v4 by first having v3 download v4, if it did not exist, into a dotfile of the user's home directory. v4 provides a hidden command heroku commands --json that outputs all the information about every command including the help. When v3 starts, it runs this command so that it knows what commands it needs to proxy to v4 as well as what the full combined help is for both v3 and v4.
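
As a rough sketch, the v3-side lookup could be built like this (the binary path and the JSON field names are assumptions for illustration, not the shim's actual code):

// Hypothetical sketch of a v3 shim building its proxy table from the
// hidden `commands --json` command. The v4 binary path and the JSON
// field names here are assumed for illustration.
const { execFileSync } = require('child_process')
const os = require('os')
const path = require('path')

const v4bin = path.join(os.homedir(), '.heroku', 'cli', 'bin', 'heroku')

// Ask the v4 CLI for metadata about every command it provides.
const output = execFileSync(v4bin, ['commands', '--json'], { encoding: 'utf8' })
const commands = JSON.parse(output)

// Build a set of fully qualified command names, e.g. "apps:info".
const v4Commands = new Set(
  commands.map(c => (c.command ? `${c.topic}:${c.command}` : c.topic))
)

function shouldProxy (commandName) {
  return v4Commands.has(commandName)
}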

For 2 years we shipped our v4 Go/Node CLI alongside v3. We converted commands one by one until everything was converted.

Go/Node (CLI v5)

The v5 release of the CLI was more of an incremental change. Users would occasionally see issues with v4 when first running the CLI because it had trouble downloading Node or the core plugins. v5 was a change from downloading Node when the CLI was first run, to including Node in the initial tarball so it would be available when the CLI first loaded. Another change was that instead of running npm install to install the core plugins on first run, we included all the core plugins' Node files with the initial tarball and kept the user plugins separate.

Ruby to Node Command Conversion Complete

In December 2016, we finally finished converting all the commands into the new plugins-based CLI. At this point we modified our installers to no longer include the v3 CLI or the shim that launched the v4 or v5 CLI. Existing users who already had the CLI installed at that point will still be using the v3 CLI, because we can't auto-update all parts of the CLI; new installs, however, do not include v3 and are fully migrated to v5 (or now, v6). If you still have the Ruby CLI installed (you’ll know if you run ‘heroku version’ and see v3.x.x mentioned), you’ll benefit from a slight speed improvement by installing the current version of the CLI to get rid of these old v3 components.

Pure Node (CLI v6)

In April 2017 we released the next big iteration of the CLI, v6. This included a number of advantages with a lighter and generic core written only in Node that could be used as a template for building other CLIs, and a new command syntax.

Leaving Go

While at Heroku we use Go heavily on the server-side with great success, Go did not work out well for us as a CLI language due to a number of issues. OS updates would cause networking issues and cross-compiling would cause issues where linking to native objects did not work. Go is also a relatively low-level language which increased the time to write new functionality. We were also writing very similar, if not exactly the same, code in Ruby and Node so we could directly compare how difficult it was to write the same functionality in multiple languages.

We had long felt that the CLI should be written in pure Node. In addition to only having one language used and fewer of the issues we had writing the CLI in Go, it also would allow for more communication between plugins and the core. In v4 and v5, the CLI started a new Node process every time it wanted to request something from a plugin or command (which takes a few hundred ms). Writing the CLI entirely in Node would keep everything loaded in a single process. Among other things, this allowed us to design a dynamic autocomplete feature we had long wanted.

cli-engine

Occasionally we would be asked how other people could take advantage of the CLI codebase for their own use — not just to extend the Heroku CLI, but to write entirely new CLIs themselves. Unfortunately the Node/Go CLI was complicated for a few reasons: it had a complex Makefile to build both languages and the plugins, it was designed to work both standalone as well as inside v3, and there was quite a bit of “special” functionality that only worked with Heroku commands. (A good example is the --app flag). We wanted a general solution to allow other potential CLI writers to be able to have custom functionality like this as well.

CLI v6 is built on a platform we call cli-engine. It's not something that is quite ready for public use just yet, but the code is open sourced if you'd like to take a peek and see how it works. Expect to hear more about this soon when we launch examples and documentation around its use.

New Plugin Interface

Due to the changes needed to support much of the new functionality in CLI v6, we knew that we would have to significantly change the way plugins were written. Rather than look at this as a challenge, we considered it an opportunity to make improvements with new JavaScript syntax.

The main change was moving from the old JavaScript object-based commands to ES2015 (ES6) class-based commands.

// v5
const cli = require('heroku-cli-util')
const co = require('co')
function * run (context, heroku) {
  let user = context.flags.user || 'world'
  cli.log(`hello ${user}`)
}
module.exports = {
  topic: 'hello',
  command: 'world',
  flags: [
    { name: 'user', hasValue: true, description: 'who to say hello to' }
  ],
  run: co.wrap(cli.command(run))
}

// v6
import {Command, flags} from 'cli-engine-heroku'

export default class HelloCommand extends Command {
  static topic = 'hello'
  static command = 'world'
  static flags = {
    user: flags.string({ description: 'who to say hello to' })
  }

  async run () {
    let user = this.flags.user || 'world'
    this.out.log(`hello ${user}`)
  }
}

async/await

async/await finally landed in Node 7 while we were building CLI v6. We had been anticipating this since we began the project by using co. Switching to async/await is largely a drop-in replacement:

// co
const co = require('co')
let run = co.wrap(function * () {
  let apps = yield heroku.get('/apps')
  console.dir(apps)
})

// async/await
async function run () {
  let apps = await heroku.get('/apps')
  console.dir(apps)
}

The only downside of moving away from co is that it offered some parallelization tricks using arrays of promises or objects of promises. We have to fall back to using Promise.all() now:

// co
let run = co.wrap(function * () {
  let apps = yield {
    a: heroku.get('/apps/appa'),
    b: heroku.get('/apps/appb')
  }
  console.dir(apps.a)
  console.dir(apps.b)
})

// async/await
async function run () {
  let apps = await Promise.all([
    heroku.get('/apps/appa'),
    heroku.get('/apps/appb')
  ])
  console.dir(apps[0])
  console.dir(apps[1])
}

It's not a major drawback, but it does make the code slightly more complicated. Not having to carry a dependency and the semantic benefits of using async/await far outweigh it.
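
If you miss the object form, a small standalone helper (not part of the CLI or co) recovers it on top of Promise.all():

// Standalone helper, not part of the CLI: awaits every value in an
// object of promises and returns an object of resolved results.
async function awaitObject (promises) {
  const keys = Object.keys(promises)
  const values = await Promise.all(keys.map(k => promises[k]))
  const result = {}
  keys.forEach((k, i) => { result[k] = values[i] })
  return result
}

// Usage, with the heroku API client from the examples above:
// let apps = await awaitObject({
//   a: heroku.get('/apps/appa'),
//   b: heroku.get('/apps/appb')
// })
// console.dir(apps.a)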

Flow

The CLI is now written with Flow. This static type checker makes plugin development much easier as it can enable text editors to provide powerful code autocomplete and syntax checking, verifying that the plugin interface is used correctly. It makes plugins more resilient to change by providing interfaces checked with static analysis.

While learning new tools is a challenge when writing code, we've found that with Flow the difficulty was mostly in writing the core of the CLI, not in writing plugins. Writing plugins involves using existing types and functions, so plugins often won't have any type definitions at all, whereas the core has many. This means we as the CLI engineers have done the hard work of setting up the static analysis, while plugin developers reap the benefits of having their code checked without having to learn much, if anything, about a new tool.
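
As a rough illustration (generic Flow, not code from the CLI), a single annotation is enough for the checker to flag a bad call before the plugin ever runs:

// @flow
// Hypothetical helper, for illustration only.
function greeting (user: string): string {
  return `hello ${user}`
}

greeting('jeff') // OK
greeting(42)     // Flow error: number is incompatible with string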

Babel

Class properties and Flow required us to use Babel to preprocess the code. Because the process for developing plugins requires you to “link” plugins into the CLI, we are able to check whether the code has changed before running the plugin. This means we can use Babel without requiring a “watch” process to build the code. It happens automatically, and there is no need to set up Babel or anything else; all you need to develop plugins is the CLI. (Note that Node must be installed for testing plugins, but it isn't needed to run a plugin in dev mode.)

Improved Testing

Testing is crucial to a large, heavily used CLI. Making changes in the core of the CLI can have unexpected impact, so providing good test coverage and making tests easy to write well is very important. We've seen which common patterns are useful in writing tests and iterated on them to make them concise and simple.

As part of the new plugin interface, we've also done some work to make testing better. There were some gaps in our coverage before where we would have common issues. We worked hard to fill those gaps, ensuring our tests guaranteed commands were properly implementing the plugin interface while keeping the tests as simple as possible to write. Here is what they look like in comparison from v5 of the CLI to v6:

// v5 mocha test: ./test/commands/hello.js
const cli = require('heroku-cli-util')
const expect = require('chai').expect
const cmd = require('../../commands/hello') // command under test (path illustrative)
describe('hello:world', function () {
  beforeEach(() => {
    cli.mockConsole()
  })
  it('says hello to a user', function () {
    return cmd.run({flags: {user: 'jeff'}})
      .then(() => expect(cli.stdout).to.equal('hello jeff!\n'))
  })
})

// v6 jest test: ./src/commands/hello.test.js
import Hello from './hello'
describe('hello:world', () => {
  it('says hello to a user', async () => {
    let {stdout} = await Hello.mock('--user', 'jeff')
    expect(stdout).toEqual('hello jeff!\n')
  })
})

The syntax is almost identical, but we're using Jest in v6 and Mocha in v5. Jest comes preloaded with a mocking framework and an expectation framework, so there is much less to configure than with Mocha.

The v6 tests also run the flag parser which is why '--user', 'jeff' has to be passed in. This avoids a common issue with writing v5 tests where you could write a test that works but not include the flag on the command definition. Also, if there is any quirk or change with the parser, we'll be able to catch it in the test since it's running the same parser.

What's Next?

With these changes in place, we've built a foundation for the CLI that's already been successful for several projects at Heroku. It empowers teams to quickly build new functionality that is well tested, easy to maintain, and has solid test coverage. In addition, with our CLI Style Guide and common UI components, we're able to deliver a consistent interface.

In the near future, expect to see more work done to build more interactive interfaces that take advantage of what is possible in a CLI. We're also planning on helping others build similar CLIs both through releasing cli-engine as a general purpose CLI framework, but also through guidelines taken from our Style Guide that we feel all CLIs should strive to meet.

15 Aug 18:25

Twitter Favorites: [rcousine] So...this is happening in Toronto... https://t.co/GybzZmW3DR

Ryan Cousineau @rcousine
So...this is happening in Toronto... twitter.com/betterdwelling…
15 Aug 18:24

Samsung Galaxy Note 8 Dual Camera Setup Details Leak

by Rajesh Pandey
In just over a week from now, Samsung will be unveiling the Galaxy Note 8 at an event in New York. Among other things, the Galaxy Note 8 will be Samsung’s first device to feature a dual camera setup at its rear. Continue reading →
15 Aug 18:24

Taking Turns

by Jeremy Antley

Labyrinth: The War on Terror (2001–?), a board game released in late 2010, is one of the few games, analog or digital, to present a comprehensive, abstracted model of the Western powers’ approach toward combatting terrorism — what George W. Bush’s administration branded the “Global War on Terror.” It differs from other, more popular video games on the topic — such as the various iterations of Tom Clancy’s Rainbow Six franchise or the Call of Duty series — in that players do not run around in a first- or third-person perspective, toting automatic weaponry and assaulting terrorist strongholds. One is not immersed in the war-porn immediacy of violence; instead Labyrinth players use cards representing capabilities or historical events — like the recent Paris Attacks, the revelation of abuses committed by U.S. troops at Abu Ghraib, or the emergence of al-Zarqawi in the Iraq war of the mid-2000s — to move wooden pieces representing troops and terrorist cells across a stylized map of the Western and Muslim world.

Board games, no less than video games, are synthesized reflections of the society from which they spring

This antiseptic approach, though it may not overwhelm the senses or simulate real-time action, allows Labyrinth to offer a greater range of narratives beyond the fixed scripts of video games: Each play-through sees a novel configuration of events spun into a narrative that nonetheless re-creates and reinforces how those incidents first engrained themselves in Western culture. Designed as a two-player, U.S. vs. “jihadist” war game, Labyrinth revolves around controlling how certain nations regarded as pivotal to the “war on terror” are ruled. In the game, the U.S. player seeks to change the way Muslim nations are governed by suppressing the presumed incentives for supporting radicalism, while the jihadist player seeks to destabilize governance, making it easier to further their agenda — in the words of the game’s designers, “the ongoing bid by Islamic extremists to impose their own brand of religious rule on the Muslim world.” Sometimes the jihadists win, sometimes the U.S. wins, though the process by which this is decided differs from game to game.

Gameplay involves a cycle of action and reaction between the two players. For example, the jihadist, attempting to gain access to weapons of mass destruction, may try to recruit underground cells in Pakistan; the U.S. player may then respond by moving troops there from nearby Afghanistan. The jihadist player in turn may then decide to lay down “plot markers” in both Afghanistan and Pakistan. This would then force the U.S. player to use precious resources to “alert” both plots and remove them from play instead of conducting disruption operations against underground cells.

The game’s mechanics attempt to represent not the visceral intensity of combat but the logic of terrorism and the strategies of asymmetric warfare as they are understood from a Western perspective. But board games, no less than video games, are synthesized reflections of the society from which they spring. So Labyrinth’s narratives also encode Western assumptions regarding the “true” nature of terrorism, along with Western heuristics for understanding the cause-and-effect chain associated with terrorism, which players are meant to contemplate through play.

All commercial war board-game design must find the sweet spot between verisimilitude and playability. To produce a two-player game that can be played in two to four hours requires a balancing act between design abstraction and embedded narrative: Each must reinforce the other. The resulting model must not only present a streamlined historical sense of cause and effect but also a simplified and potentially satisfying ideological reading of the conflict: Labyrinth winnows what terrorism is and can become into forms instantly recognizable by its anticipated players. If the game design is successful, play acts as a feedback loop for the reality it purports to simulate.

Right up front, Labyrinth makes clear how it links Islamic extremism with terrorism, often precluding other forms of violence and other actors from being perceived as terrorism or terrorists. The West is represented as the only guarantor of liberal ideals, with Islamic extremism (i.e., terrorism) best solved by force or through alignment with Western ideological perspectives. Jihadism is seen as the by-product of poor governance in the nation-states that foster its existence. There is little mention of the often exploitative or destabilizing relationship between Western powers and the Muslim world.

Labyrinth winnows what terrorism is and can become into forms instantly recognizable by its anticipated players. Play acts as a feedback loop for the reality it purports to simulate

This narrative framing found reinforcement with the release of Labyrinth’s sequel. In the rule book for the 2016 Labyrinth: The Awakening (2010–?), designer Trevor Bender claimed that the new game provided “up to date” events that allow “the game to continue to serve as an effective strategic level model of the ongoing struggles in the Muslim world.”

More so than any other material component, the games’ event cards distill Western attitudes about the Global War on Terror. They feature not only particular incidents but portraits of personalities, places, and particularities of the struggle, anchoring and extending the West’s perception of itself and the “other” it imagines it confronts. The cards not only familiarize Western audiences with what is deemed important by the designer specifically and Western culture generally, but they also posit a range of dramatic potentialities for events without upsetting the established historical narrative. Cards such as Lashkar-e-Tayyiba (“Kashmir insurgency spawns jihadists”) and Al-Ittihad al-Islami (“Somali jihadists”) re-create within the game the same perception that authorizes it as a whole: the belief, most prominent in the rhetoric of right-wing demagogues, that global-scale Islamic extremism is the only force that produces “real” terrorism and that jihadis may spring forth from any Muslim nation.

Each card distills large, sprawling subjects and historical incidents into a miniature, poised for deployment to trigger a dramatic moment of gameplay. The event cards become concentrated moments of emotional intensity in the montage of incidents that gameplay unfolds. Game effects are fused with the emotional affect associated with terrorist acts in the real world. But the event cards also prompt an understanding of terrorism only within the logical and predictable boundaries established by the game’s design. This allows what would otherwise be a messy outpouring of emotion to be funneled into an ideological channel within a controlled narrative form.

To play Labyrinth or its sequel is to seemingly engage in the Global War on Terror at a miniature scale. And miniatures, according to historian Hillel Schwartz in The Culture of the Copy, allow subtlety “with the play-full-ness of character” and extend us “toward second chances, times ahead and remediable.” Labyrinth players place and manipulate their miniatures on the game board in an attempt to seek not only understanding but also control and a sense of transcendence over the conflict depicted. Play becomes both reassuring and palliative, a means by which to relive and comprehend the Global War on Terror without experiencing its accompanying sense of helplessness or futility.


As much as Labyrinth and its sequel have in common in how they depict terrorism, their differences suggest ways in which Western ideas and anxieties have shifted. New event cards point toward the evolving role of technology in terrorism, and changing attitudes toward technology’s potential in preventing it.

In the original game, the card deck cast technology as a complementary asset suited to the ideological perspective of whichever side used it. Labyrinth did not classify any technology-themed event cards as neutral; they were associated with either the U.S. or jihadist side. The technology cards available to the U.S. — Biometrics, Predators, the Intel Community, and Wiretapping — emphasized the use of technology in the form of surveillance by federal authorities on entire populations. By contrast, the jihadist technology cards — IEDs, Clean Operatives, and Martyrdom Operations — depict technology as an asymmetric tactic that leverages surprise and the media’s propensity to amplify spectacle. So where the U.S.’s technology is determining — that is, it can be implemented at will and automatically yields results (surveillance captures its targets and produces relevant intelligence) — the jihadists’ technology relies on chance.

Where the technology cards available to the U.S. are determining and automatically yield results, the jihadists’ technology relies on chance

Certain technologies are seen as a force for good and thus inherently antithetical to the jihadist cause. Wiretapping and Biometrics in particular are treated as guarantors, capable of faultlessly distinguishing a harmful jihadist from a regular citizen of the West. Jihadist technologies are cast as crude and indiscriminate in comparison, reliant on haphazard violence and emotional affect rather than planned militaristic effect. The Western powers may use bombs, but they don’t use IEDs — at least according to Labyrinth.

Labyrinth: The Awakening broke with its predecessor by having some neutral technology types that belong exclusively to neither side. So while some of the exclusive technologies for both sides have been updated, the game also includes cards that suggest certain forms of technology can both facilitate and combat terrorism. This was perhaps driven by a desire to represent ISIS and its more sophisticated use of Western-sourced media techniques. The neutral event cards Cyber Warfare (“Hacker penetrates key systems”) and JV/Copycat (“Radicalization of international jihadists using the darkweb”) suggest that the increased development and reliance upon networked computing and the emergence of the internet of things have made these tools broadly accessible and not the exclusive domain of state actors.

Two other new cards suggest an evolved sense of jihadist capabilities, showing how the perceived lines between the West and Islamic extremists have blurred. A card for Snowden points to the dual-edged nature of homeland security efforts, how the U.S. deploys intensive surveillance against citizens to ostensibly protect their freedom. And a card for Unconfirmed reveals the limits of anti-terrorism campaigns prosecuted through air power or use of Special Forces alone. In the game, the Unconfirmed card represents not just mission failure but also calls into question whether submission of the enemy can ever be confirmed. There are always more targets.

Perhaps most telling is the new card for Jihadi John. It resembles the Jihadist Videos card from the original card set, in that each depicts jihadists staring into the camera, suggesting an intention of using the internet to spread propaganda. But while the Jihadist Videos card is a jihadist event and has imagery, text, and a game effect that suggest the videos in question are meant for a predominantly non-Western, Muslim audience, the Jihadi John card — a neutral card — suggests something different. No longer an anonymous/ubiquitous extremist, Jihadi John is depicted as a celebrity, in every grotesque meaning of the word, whose decapitations are tailor-made spectacles for Western audiences.

It is as if the West cannot help but be captivated by the appearance of Jihadi John, even as it finds his actions abhorrent. He cannot be othered, even if his purpose is to clearly demarcate one culture from another, because his YouTube presence calls into question what it means to be other in the first place. His perfect English and his background and upbringing in the birthplace of the modern liberal order appear to contrast with his avowed beliefs and demonstrate the relative failure of Western modernity to shape and produce its ideal citizens.

Here the streamlined history and simplified ideological reading of the conflict serve only to highlight the murkiness of self-reflection prompted by the desire for verisimilitude. Seeking understanding of Jihadi John in the form of a Labyrinth event card reveals not only the limits of the game’s design but also the limits of board games as a whole as technologies of representation. Successfully addressing the joint issues of playability and verisimilitude makes ideological indoctrination seamlessly pleasurable, but not all subjects — such as the use of technology as depicted in Labyrinth: The Awakening — can transition into simplified ideological forms. When this tension between playability and understanding becomes apparent, as it does with the Jihadi John event card, it upsets the pleasure of play and muddies the otherwise clear view of history the game tries to let players experience.

“Our self-portraits now neither anchor nor extend us because we are no longer sure of ourselves as originals, no longer sure of what it means to be inspirited,” Schwartz concluded about the Western fascination with self-depiction and its connection to introspective truth. The Labyrinth event cards are also uncertain portraits — mirrors, or miniature “personable doubles” that in trying to define the “other” actually define Westerners in their uncertainty. Schwartz identified “an ache for continual ‘correctness’ in an industrial society increasingly anxious about the passage of time,” and the new event cards try to present this correctness within the game’s necessarily simplified terms. But a nagging sense of inaccuracy remains.

Indeed, the shifting depiction of technology use between Labyrinth and Labyrinth: The Awakening points toward a conflict that is becoming more ill-defined as it enters its 16th year. Efforts to update events are meant to sustain the ambiance of accuracy for Western players. But, as Schwartz reminds us, “the more we have demanded correctness … the more we have finished in dissemblance.” Players may look to Labyrinth or Labyrinth: The Awakening for better understanding of the Global War on Terror, but they will find only more questions about themselves.

15 Aug 18:24

Lightbeam goes virtual!

by princiya

Here is a gif from my latest aframe experiments for Lightbeam.


For Mozfest 2017, I submitted the following proposal – ‘Lightbeam, an immersive experience‘. While the proposal is still being reviewed, I have been experimenting with A-Frame, and the above gif is an initial proof of concept 🙂
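As a rough sketch of the idea (the data and scene setup here are invented for illustration, not Lightbeam’s actual code), each third-party tracker can be rendered as an A-Frame entity whose size reflects how many sites load it:

// Hypothetical sketch: draw each tracker as a sphere in an A-Frame scene
const scene = document.querySelector('a-scene')

const trackers = [                      // invented sample data
  {host: 'tracker.example', connections: 5},
  {host: 'ads.example', connections: 2}
]

trackers.forEach((tracker, i) => {
  const sphere = document.createElement('a-sphere')
  sphere.setAttribute('position', `${i * 2 - 2} 1.5 -4`)          // spread along the x-axis
  sphere.setAttribute('radius', 0.3 + tracker.connections * 0.1)  // size by connection count
  sphere.setAttribute('color', '#909')
  scene.appendChild(sphere)
})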

Here is the excerpt from the proposal:

What will happen in your session?

Lightbeam is a key tool for Mozilla to educate the public about privacy. Using interactive visualisations, Lightbeam’s main goal is to show web tracking, that is, to show the first and third party sites you interact with on the Web.

In this session, the participants will get to interact with the trackers in the VR world thus creating an immersive Lightbeam experience. With animated transitions and well-crafted interfaces, this unique Lightbeam experience can make exploring trackers feel more like playing a game. This can be a great medium for engaging an audience who might not otherwise care about web privacy & security.

What is the goal or outcome of your session?

The ultimate goal of this session is for the audience to know and understand about web tracking.

While web tracking isn’t 100% evil (cookies can help your favourite websites stay in business), its workings remain poorly understood. Your personal information is valuable, and it’s your right to know what data is being collected about you. The trouble is that this data is shared with third parties to help them come up with new ways to convince you to spend money and give up more information. It would be fine if you decided to give up this information for a tangible benefit, but no one is including you in the decision.


15 Aug 18:23

OK Google, be aesthetically pleasing

by Alex Bate

Maker Andrew Jones took a Raspberry Pi and the Google Assistant SDK and created a gorgeous-looking, and highly functional, alternative to store-bought smart speakers.

Raspberry Pi Google AI Assistant

In this video I get an “Ok Google” voice activated AI assistant running on a raspberry pi. I also hand make a nice wooden box for it to live in.

OK Google, what are you?

Google Assistant is software of the same ilk as Amazon’s Alexa, Apple’s Siri and Microsoft’s Cortana. It’s a virtual assistant that allows you to request information, play audio, and control smart home devices via voice commands.

Infinite Looping Siri, Alexa and Google Home

One can barely see the iPhone’s screen. That’s because I have a privacy protection screen. Sorry, did not check the camera angle. Learn how to create your own loop, why we put Cortana out of the loop, and how to train Siri to an artificial voice: https://www.danrl.com/2016/12/01/looping-ais-siri-alexa-google-home.html

You probably have a digital assistant on your mobile phone, and if you go to the home of someone even mildly tech-savvy, you may see a device awaiting commands via a wake word such as the device’s name or, for the Google Assistant, the phrase “OK, Google”.

Homebrew versions

Understanding the maker need to ‘put tech into stuff’ and upgrade everyday objects into everyday objects 2.0, the creators of these virtual assistants have opened up access for developers to run their software on devices such as the Raspberry Pi. This means that your common-or-garden homemade robot can now be controlled via voice, and your shed-built home automation system can have easy-to-use internet connectivity via a reliable, multi-device platform.

Andrew’s Google Assistant build

Andrew gives a peerless explanation of how the Google Assistant works:

There’s Google’s Cloud. You log into Google’s Cloud and you do a bunch of cloud configuration cloud stuff. And then on the Raspberry Pi you install some Python software and you do a bunch of configuration. And then the cloud and the Pi talk the clouds kitten rainbow protocol and then you get a Google AI assistant.

It all makes perfect sense. Though for more detail, you could always head directly to Google.

Andrew Jones Raspberry Pi OK Google Assistant

I couldn’t have explained it better myself

Andrew decided to take his Google Assistant-enabled Raspberry Pi and create a new body for it. One that was more aesthetically pleasing than the standard Pi-inna-box. After wiring his build and cannibalising some speakers and a microphone, he created a sleek, wooden body that would sit quite comfortably in any Bang & Olufsen shop window.

Find the entire build tutorial on Instructables.

Make your own

It’s more straightforward than Andrew’s explanation suggests, we promise! And with an array of useful resources online, you should be able to incorporate your choice of virtual assistants into your build.

There’s The Raspberry Pi Guy’s tutorial on setting up Amazon Alexa on the Raspberry Pi. If you’re looking to use Siri on your Pi, YouTube has a plethora of tutorials waiting for you. And lastly, check out Microsoft’s site for using Cortana on the Pi!

If you’re looking for more information on Google Assistant, check out issue 57 of The MagPi Magazine, free to download as a PDF. The print edition of this issue came with a free AIY Projects Voice Kit, and you can sign up for The MagPi newsletter to be the first to know about the kit’s availability for purchase.

The post OK Google, be aesthetically pleasing appeared first on Raspberry Pi.

15 Aug 18:23

Organizing around State

This is L-Systems with Redux and StateReflector, but now with more organization around updating the state and around efficiently updating the DOM in response to changes in the state. That is, it is a simple app for drawing L-Systems, where changes to the parameters (Length, Angle) are stored in the URL fragment identifier, creating a permalink for any selections a user makes. The state of the page is kept in a Redux state store; we use StateTools.getDelta() to store that changing state in the URL fragment, and we also pull the state out of the URL and push it into the store when the URL changes.
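Here is a minimal sketch of that state/URL round trip, with the reducer and action shape assumed for illustration (the real app's details may differ):

// Sketch: keep Redux state and the URL fragment in sync (assumed details)
import {createStore} from 'redux'

const initial = {length: 4, angle: 30}

function reducer (state = initial, action) {
  if (action.type === 'PARAM_CHANGED') {
    return Object.assign({}, state, {[action.key]: action.value})
  }
  return state
}

const store = createStore(reducer)

// Reflect every state change into the fragment identifier, creating a permalink.
store.subscribe(() => {
  window.location.hash = encodeURIComponent(JSON.stringify(store.getState()))
})

// Pull state back out of the URL when the user follows a permalink.
// (Setting the hash to an identical value above does not re-fire this event,
// so the cycle terminates.)
window.addEventListener('hashchange', () => {
  const state = JSON.parse(decodeURIComponent(window.location.hash.slice(1)))
  Object.entries(state).forEach(([key, value]) =>
    store.dispatch({type: 'PARAM_CHANGED', key, value}))
})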


The old render() function in our previous iteration was inefficient: every time the state changed, all the elements were updated, even if the new value wasn't different from the old value.

To make this more efficient we can leverage StateTools.getDelta() (StateReflector has been renamed to StateTools), which returns the differences between the old state and the new state, so we only have to update elements that have changed. Similarly, on the side of updating the state, we can consolidate some code around responding to events.
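A sketch of what that delta-driven update looks like, assuming getDelta() takes the old and new state and returns an object containing only the keys whose values changed (the element ids are invented for illustration):

// Sketch: only touch the DOM for values that actually changed (assumed details)
let previousState = store.getState()

store.subscribe(() => {
  const state = store.getState()
  const delta = StateTools.getDelta(previousState, state)
  previousState = state

  if ('length' in delta) {
    document.querySelector('#length').value = delta.length
  }
  if ('angle' in delta) {
    document.querySelector('#angle').value = delta.angle
  }
})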

The markup for the page is straightforward: just labeled inputs for Length and Angle and an element to draw the L-System into.

And the JS is now much more compact as well, coming in at just 73 lines of code.

What makes the main code so compact is the use of Redux for state management, and Binder, a small class for mapping state changes to and from HTML elements.
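A Binder-like helper might look roughly like this (a sketch under assumptions; the actual Binder's details may differ):

// Hypothetical sketch: wire an input element to a key in the state
class Binder {
  constructor (element, key, store) {
    // User edits flow into the store...
    element.addEventListener('input', () =>
      store.dispatch({type: 'PARAM_CHANGED', key: key, value: +element.value}))
    // ...and state changes flow back into the element. Setting .value
    // programmatically does not fire an 'input' event, which is what
    // prevents an infinite update cycle.
    store.subscribe(() => {
      const value = store.getState()[key]
      if (+element.value !== value) element.value = value
    })
  }
}

// Usage (element ids invented for illustration):
// new Binder(document.querySelector('#length'), 'length', store)
// new Binder(document.querySelector('#angle'), 'angle', store)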

Binder is able to work because of some uniformity in how native HTML elements behave, in particular that events are only generated via external stimuli. That is, merely changing the state of an element doesn't generate an event; for example, changing the checked value of a checkbox via the DOM does not generate an 'input' or 'change' event.

This means that we should never encounter an infinite cascade of events. If an element generates an event and that changes the state, the new state will then be reflected in the elements, but since no new events are generated, the update cycle stops there. The general principles of not generating events on DOM changes, and of announcing changes in state via events, are ones that should carry over when creating our own Custom Elements, which we'll get to in a later installment.

In general the code is fairly simplistic; I've only coded the happy path, and since our StateTools.getDelta algorithm is crude we can't really handle complex state objects. But there are plenty of other open-source object-diffing libraries that could replace the simple version used here. OTOH, there is enough code here to prove out the idea: building these one-way and two-way bindings around state objects is not only feasible, it dramatically improves the code and makes it easier to reason about the behavior of the system.

15 Aug 18:21

@elonmusk

15 Aug 18:21

"The great divider of the left in America has always been racism."

“The great divider of the left in America has always been racism.”

- Sarah Leonard, cited by Dylan Matthews in Inside Jacobin: how a socialist magazine is winning the left’s war of ideas

We have no US equivalent of a labor or socialist party in our politics, because the working class cannot find solidarity across racial lines. Race is used by neoliberal right and left (GOP and Dems) to divide the working class.

Perhaps now, in the postnormal, we can find a way through via fluidarity – where we don’t need to agree on everything to agree on a few things.

15 Aug 18:18

Merck, Under Armour, Intel CEOs bolt from Team Trump

mkalus shared this story from The Globe and Mail - U.S. Politics.

For U.S. business leaders, navigating President Donald Trump’s controversies is proving to be a thorny task.

Kenneth Frazier, the chief executive officer of pharmaceutical giant Merck & Co., resigned from the Trump administration’s manufacturing council early on Monday, in protest against the President’s weak initial response to violence at a white-supremacist rally in Virginia.

By doing so, Mr. Frazier became the latest in a growing line of corporate executives to accept informal advisory roles with the White House, only to walk away when Mr. Trump’s actions or policies proved to be too contentious.

Under Armour CEO Kevin Plank and Intel CEO Brian Krzanich also resigned from the manufacturing council on Monday.

“America’s leaders must honour our fundamental values by clearly rejecting expressions of hatred, bigotry and group supremacy,” Mr. Frazier said in a statement that was posted on Twitter.


But the majority of Corporate America has remained silent or has denounced racism without criticizing the President’s actions.

After other business leaders were pressed to follow Mr. Frazier’s lead and resign from the manufacturing council, the group’s head, Dow Chemical Co.’s chief executive Andrew Liveris, took to Twitter to reject hatred, racism or bigotry.

But Mr. Liveris did not resign from the council, nor did he criticize how Mr. Trump reacted to the white-supremacist rally in Charlottesville, Va., where a man is accused of driving his car through a crowd of counterprotesters, killing one woman and injuring others.

Mr. Trump’s initial response was to say he condemned the “egregious display of hatred, bigotry and violence on many sides.”

However, he did not personally denounce neo-Nazis, the Ku Klux Klan and other white supremacists by name until about 48 hours after the rally and a few hours after Mr. Frazier announced his resignation on Monday.

Likewise, Campbell Soup Company said racist ideology must be condemned, but said its chief executive would remain on the manufacturing council because it was important to provide input on matters that affect the soup maker’s industry.

The head of private equity firm Blackstone, Stephen Schwarzman, also rebuked the “bigotry, hatred and extremism,” and did not resign from his presidential advisory role: “As the President said today, I believe we need to find a path forward to heal the wounds left by this tragedy and address its underlying causes.”

Corporate crisis managers said companies are starting to believe that it is safe to repudiate the President.

“We are approaching a tipping point where it will be more important for companies to act sooner rather than later,” said Richard Levick, founder of Levick Strategic Communications and a veteran crisis manager.

In the first few months of Mr. Trump’s presidency, Mr. Levick said he was getting client inquiries on how to stay on the good side of Mr. Trump’s tweets.

After Merck announced that Mr. Frazier was leaving the council, Mr. Trump quickly responded by tweeting the executive “will have more time to LOWER RIPOFF DRUG PRICES!”

Now, Mr. Levick said, his clients, which include Fortune 500 companies, are starting to feel safer about speaking out.

Meg Whitman, the chief executive of Hewlett Packard Enterprise, praised Mr. Frazier’s decision and said on Twitter: “I’m thankful we have business leaders such as Ken to remind America of its better angels … Americans expect their political leaders to denounce white supremacists by name.”

Lloyd Blankfein, the chief executive of Goldman Sachs, who has called the President’s decision to bail on the climate-change deal a setback, tweeted: “Lincoln: ‘A house divided against itself cannot stand.’ Isolate those who try to separate us. No equivalence w/ those who bring us together.”

Since Mr. Trump won the presidency, companies have been trying not to anger Mr. Trump while ensuring that their brands are not tarnished on social media.

Companies such as Carrier Corp. and Ford Motor Co. appeared to manipulate Mr. Trump by announcing plans to keep certain manufacturing jobs in the U.S.

Apart from Monday’s resignations, this latest crisis has not inspired many business leaders to speak out.

“I suspect that corporate executives are hopeful that the Republican Party can pass tax cuts soon … and they believe criticism of Trump undercuts the chance those goals are achieved,” said Jeff Hauser, who runs the Revolving Door Project, which examines corporate influence in the executive branch.

“American headquartered companies increasingly spend more branding themselves progressive via advertising and public relations while exhibiting conscientious corporate behaviour with declining frequency,” he said.



Trump vs. Corporate America

A selection of the President’s tweets taking aim at U.S. businesses

15 Aug 18:17

Rebel Media co-founder quits over company's perceived ties to right-wing groups

mkalus shared this story from The Globe and Mail - Life.

Brian Lilley, a co-founder of the upstart conservative online news and opinion outlet The Rebel Media, quit the company on Monday, saying that he was not comfortable working for an organization that, “is being increasingly viewed as associated with the likes of [white supremacist] Richard Spencer.”

In a Facebook post, Mr. Lilley noted that The Rebel, based in Toronto, had arisen from the ashes of the right-wing cable channel Sun News in February, 2015, to serve “an audience looking for informed commentary from a small c conservative perspective,” but he was no longer confident it was doing that.

The departure, he wrote, was “a long time coming. What may have started as a concern over the harsh tone taken on some subjects came to a head with this weekend’s events in Charlottesville, Virginia. What anyone from The Rebel was doing at a so-called ‘unite the right’ rally that was really an anti-Semitic white power rally is beyond me.”

Related: Advertisers bow to pressure to pull ads from The Rebel

Faith Goldy, one of The Rebel’s marquee commentators, provided extensive coverage of the weekend’s events, and was hosting a live-stream of a march as a car slammed into a group of demonstrators, leaving one woman dead and 19 people injured.

On Monday, the company’s co-founder and “Rebel Commander” Ezra Levant issued a memo attempting to distance the company from the so-called alt-right, arguing that the movement had drifted from its origins and “now effectively means racism, anti-Semitism and tolerance of neo-Nazism.” He alleged that Richard Spencer, a white supremacist who attended the rallies in Charlottesville, was now “the leading figure – at least in terms of media attention” of the movement.

As the movement burst into prominence last year with the election of Donald Trump – his chief strategist, Steve Bannon, had been the head of Breitbart News, a far-right American outlet seen as one of the movement’s most prominent voices – critics noted the term was already widely used to refer to a collection of groups or individuals espousing racist, fascist, or white-supremacist ideologies.

In his Facebook post, Mr. Lilley wrote: “I was never enamoured by the ‘alt-right’, never saw the appeal but I take Ezra at his word when he describes his evolution. But just as he has evolved, just as The Rebel has evolved, so have I and the uncomfortable dance that I have been doing for some time now must come to an end.

“What The Rebel suffers from is a lack of editorial and behavioural judgment that left unchecked will destroy it and those around it. For that reason, I am leaving.

“As a serious journalist with nearly 20 years’ experience at the highest levels in this country, and abroad, I cannot be a part of this.

“I am not comfortable being associated with a group that, rightly or wrongly, is being increasingly viewed as associated with the likes of Richard Spencer. Like many of you, I had family that fought the Nazis, I never want to be in the same room as one. I am also not comfortable with the increasingly harsh tone taken on issues like immigration, or Islam. There are ways to disagree on policy without resorting to us versus them rhetoric.”

After helping start The Rebel, last year Mr. Lilley began hosting a three-hour nighttime talk show on CFRA-AM, an Ottawa-based talk radio station owned by Bell Media. He moved to a freelance arrangement with The Rebel, and reduced his output from approximately three videos per day to one per day.

In a statement e-mailed to The Globe and Mail on Monday evening, Ezra Levant said: “We love Brian and can hardly wait to see what he does next.” He chalked up Mr. Lilley’s decision to pressure from Bell Media.

“I don’t doubt he’s getting flak from his other mainstream media employers, who are our competitors. Bell, Corus and Postmedia have all been pressuring our common talent to choose either them or us,” he said.

Asked about Mr. Lilley’s contention that The Rebel had lost its way, Mr. Levant responded, “I think that is absolutely the correct thing for someone to say who works for CTV/Bell. Brian’s no dummy.”

In an interview with The Globe, Mr. Lilley chuckled when told about Mr. Levant’s comments. “I made this decision on my own,” he said. “No one talked to me, no one coerced me, no one suggested this. I just no longer felt comfortable with where The Rebel is going.”

He added: “I do wish them well. I hope they right their ship.”



15 Aug 18:10

Huddle video messaging app promises a genuine digital safe space

by Sameer Chhabra
An image showing the Huddle app on an iPhone

Imagine that you’re living with depression; who would you talk to about your concerns? A doctor? A family member? A close friend?

Most people wouldn’t think to turn to the internet, especially since most websites aren’t the best places to speak candidly about personal matters, and definitely not to speak candidly face-to-face.

Yet that’s exactly what Dan Blackman and Tyler Faux hope that smartphone users will be able to do with the Huddle app.

Huddle’s an application aimed at individuals currently experiencing difficulties with mental health or addiction, who are looking for a safe space to share their thoughts and feelings with a supportive group of listeners.

Huddle’s key feature is that users communicate through video chat — not text. They can join public groups to communicate openly and they can create private groups with other Huddle users.

Blackman and Faux want their users to be able to have genuine, supportive, face-to-face conversations with one another, without worrying about criticism or bullying.

“In-person, peer-to-peer support really works,” said Blackman, in an interview with MobileSyrup. “We want to create that brand where someone can go and open up.”

A curious thing happened on the way to the forum

Blackman and Faux first met while working at Tictail, a Swedish e-commerce website that’s quite different from the startup the two co-founded together.

However, Blackman says that he has a personal connection with the kinds of issues he hopes Huddle users will discuss.

“Growing up, my dad actually dealt with an addiction to alcohol,” said Blackman. “One of the big things I learned was that the idea of labeling himself or even talking openly to a doctor is one of the reasons why he never got the help he needed.”

As for Faux, he has experience with friends and family members who thrived by receiving support from their own friends and family — a network of their peers.

“We wanted to recreate that in a digital platform.”

When Huddle was in its early stages, Blackman says that he and Faux spent quite a bit of time researching the options available for people looking for help.

“We couldn’t really find anything,” said Blackman.

There were lots of text-based “white canvas” forums and there were lots of crisis hotlines that people could call to speak with professionals, but there weren’t really any face-to-face options that allowed people seeking help to speak with other people.

“We wanted to recreate that in a digital platform,” said Blackman.

Huddle up and help people

Blackman, a Tumblr alumnus, is the first to point out some of the problems with modern social networks.

He says that there’s a lot of bullying and a lot of people using their anonymity to hurt people looking for help.

He also says that in spaces like Tumblr, it’s easy for users to co-opt the narratives of other people and turn them into personal stories.

Worse, it’s easy for people to amass a network based solely on bullying those people who do choose to share their experiences.

Unlike other social networks, Blackman believes that people will come to Huddle to provide help and to seek it as well.

An image showing a user browsing through different groups on the Huddle app

“We think of our users as two different types of people,” said Blackman. “Those who want to empower and support and tell their stories… [and those who] are a bit more vulnerable.”

So far, Blackman’s noticed that about 60 percent of users trend towards the support role, while roughly 40 percent come to Huddle to seek aid.

“One of our core values for Huddle is safety.”

It’s also because of those people looking for help that Huddle allows its users to obscure and protect their identities.

Most Huddle communications are carried out through video, but the app allows users looking for anonymity to pixelate and hide their faces.

“One of our core values for Huddle is safety,” said Blackman. “We’re not going to be able to thrive as a company without making those safe spaces.”

An image showing a Huddle user's video becoming more pixelated to obscure his identity.

To that end, Huddle utilizes a number of user moderation and administrator moderation tools that Blackman says he’s surprised more social networks don’t employ.

“One is secure login — making people sign up through a real identity [like] Facebook or a mobile phone number,” said Blackman. “If someone is a bad player, we can kick them out — they’ll need to create a new phone number or Facebook account to come back in.”

“I’m sure there are people who would go to those lengths… but it’s a big barrier to get back in.”

Empowering others through the internet

A lot of companies claim that their products will make the world a better place, but few truly take those steps in everything they do.

For Dan Blackman and Tyler Faux, their company is founded on that simple desire to help other people.

“I think pretty much from the beginning, Tyler and I are both product-makers at heart and we both really believe in what we’re doing,” said Blackman.

“Ultimately for us, we’ve worked on a number of different companies before this and we’re just genuinely excited to create something that can help people.”

Huddle is available to download for free on iOS.

The post Huddle video messaging app promises a genuine digital safe space appeared first on MobileSyrup.