iA Writer for Windows joins the family. You can download it right from the homepage without going through the Windows store. It's also supported on older versions of Windows. Give it a spin. You can try before you buy.
Earlier today, we released Drupal 8.5.0, which ships with improved features for content authors, site builders and developers.
Content authors benefit from enhanced media support and content moderation workflows. It is now easier to upload, manage, and reuse media assets, and to move content between workflow states (e.g. draft, published, or archived).
Drupal 8.5.0 also ships with a Settings Tray module, which improves the experience for site builders. Under the hood, the Settings Tray module uses Drupal 8.5's new off-canvas dialog library; Drupal module developers are encouraged to start using these new features to improve the end-user experience of their modules.
Finally, Drupal 8.5 also ships with significant improvements to the Drupal 7 to Drupal 8 migration path. After four years of work, 1,300+ closed issues and contributions from over 570 Drupalists, the migrate system's underlying architecture in Drupal 8.5 is fully stable. With the exception of sites with multilingual content, the migration path is now considered stable. Needless to say, this is a significant milestone.
What I'm probably most excited about is the fact that the new Drupal 8 release system is starting to hit its stride. The number of people contributing to Drupal continues to grow and the number of new features scheduled for Drupal 8.6 and beyond is exciting.
In future releases, we plan to add a media library, support for remote media types like YouTube videos, support for content staging, a layout builder, JSON API support, GraphQL support, a React-based administration application and a better out-of-the-box experience for evaluators. While we have made important progress on these features, they are not yet ready for core inclusion and/or production use. The layout builder is available in Drupal 8.5 as an experimental module; you can beta test the layout builder if you are interested in trying it out.
I want to extend a special thank you to the many contributors that helped make Drupal 8.5 possible. Hundreds of people and organizations have contributed to Drupal 8.5. It can be hard to appreciate what you can't see, but behind every bugfix and new feature there are a number of people and organizations that have given their time and resources to contribute back. Thank you!
Building A World Class Community Management Team: A System for Benchmarking Online Community Skills And Abilities
Imagine you decided to move into sales and on your first day, someone handed you a list of the organization’s top customers and responsibility for the entire CRM system.
No training, no support, no roadmap.
This is pretty close to what happens in community management today. Most people are suddenly handed responsibility for building a community from an organization’s top customers on an advanced technology platform.
Often they get little training, limited support, and no detailed roadmap.
At worst, it leads to empty ghost towns or pointless casinos (communities with lots of meaningless engagement).
The level of training given to community teams today is abysmal. It’s the root cause of most of the problems you and your team are facing.
It’s important to make the continual progress of your community team a priority. Your team, your members, and your organization deserve the best. In this post, we’re going to highlight how to benchmark yourself and your current team.
We’re going to identify the skills they need and how you can set reasonable targets for each of them.
Benchmarking The Community Team
We sometimes receive emails asking if one of our courses is right for a participant.
This is a hard question to answer without knowing someone’s ability. Most people don’t know how good they are because they have no benchmarks to measure themselves against; they judge themselves by the size of their community rather than by their own abilities.
We benchmark community professionals along five attributes (adapted from our friends at the Community Roundtable):
1) Strategy. This is the ability to develop and execute a community strategy which deploys the organization’s limited resources to maximum impact.
2) Engagement. This is the ability to proactively engage, nurture top members, and build systems to improve the overall participation environment of the community.
3) Content. This is the ability to create original content and drive high-value contributions from other members.
4) Technical. This is the ability to select, implement, and optimize a community platform. This includes resolving technical problems and managing vendor relationships.
5) Business. This is the ability to build allies throughout the organization, measure value, run a community team, and gather more resources for the community.
We break each of these down into four distinct levels, ranging from okay to world-class.
You should strive to gradually upgrade yourself and the community team to a world-class level in each of these areas.
This is subjective, but I recommend copying and adapting our benchmarking resource below.
Score each member of staff between 0 to 4 on each of the five attributes.
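As an illustrative aside (not part of the original benchmarking resource), the 0-to-4 scoring lends itself to being tracked in a small script. The names, scores, and attribute keys below are invented for the example:

```python
# Illustrative sketch: score each team member 0-4 on the five attributes,
# then surface each person's weakest area as their next improvement target.
# All names and scores here are made up.

ATTRIBUTES = ["strategy", "engagement", "content", "technical", "business"]

team = {
    "alice": {"strategy": 3, "engagement": 4, "content": 2, "technical": 1, "business": 2},
    "bob":   {"strategy": 1, "engagement": 2, "content": 3, "technical": 3, "business": 1},
}

def weakest_attribute(scores):
    """Return the lowest-scored attribute -- a natural next improvement target."""
    return min(ATTRIBUTES, key=lambda a: scores[a])

def team_average(team, attribute):
    """Average team score for one attribute, on the 0-4 scale."""
    return sum(member[attribute] for member in team.values()) / len(team)

for name, scores in team.items():
    print(name, "should focus on:", weakest_attribute(scores))

print("team average for strategy:", team_average(team, "strategy"))
```

Even a simple table like this makes gaps visible at a glance, which is the whole point of benchmarking.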
We’re going to break down each of these levels below:
1) Strategy (or strategic thinking)
When benchmarking someone’s strategic abilities, you want to track their journey from thinking strategically to having a codified and invaluable strategy everyone understands and supports. This increasingly relies on research, metrics, and project management skills.
Strategy is about allocating resources to have maximum impact. It’s not about trying to do as many things as possible, but deciding what’s worth doing and allocating maximum resources to ensure success.
Or, to use an analogy, it’s not about dividing resources evenly to fight every battle, but about deciding which battles are worth fighting (see this big wins talk).
The first level is to ensure all staff members are thinking strategically about how they spend their time. Are they constantly using available data to review which of their activities are driving results, and pursuing those that are most effective?
As your staff progress, you want them to be proactively researching what members want and using that data to improve the community. They should know how their tactics serve a strategy which serves an objective which serves a goal.
At the more advanced levels, you want them to become great at segmenting members, ensuring members are making their most valuable contributions, establishing benchmarks, and pursuing reasonable community goals. By this point the strategy should be internalized throughout the entire community team, with modeling of different inputs to achieve goals.
2) Engagement

Engagement skills are the community team’s core abilities for creating an environment in which every member can make their best possible contribution to the community.
This begins at the lower levels with being a terrific community member.
Do your community staff resolve and escalate problems well?
Can they remove the bad material quickly?
Do they build positive relationships with community members?
As they move up the chain, they should focus on building systems which create a powerful sense of community and nurture superusers among the group. They should get better at building and optimizing the journey which turns newcomers into regular, active members.
At the highest levels, you and your team need to be able to improve the resolution rates, address legal/brand issues, and ensure all staff know how best to engage members in the community.
3) Content

Content is one of the areas where everyone considers themselves an expert (aren’t you a great writer?). In practice, content skill is the ability to develop and facilitate the creation of valuable long-form educational or entertaining content across blogs, webinars, videos, etc.
At the simpler levels, all members of your team should be able to synthesize great content from existing work and member contributions. This should be nicely designed and implemented across multiple platforms.
As you improve, you should be able to optimize web copy and improve conversion rates, increase search traffic, and build an editorial calendar. This requires reasonable copywriting and SEO abilities.
At the higher levels, your team should be able to ensure an editorial calendar is adhered to (surprisingly hard), develop automation campaigns, edit contributions of other members and persuade top experts to create great content for your community.
Finally, a great community professional should be able to commission ‘big win’ content projects (e.g. our platform selection tool) which go far beyond a simple blog post or video. Such a project is built around a unique, viral idea, attracts great search traffic, and warrants its own design and development.
4) Technical

Far too many people in this field profess to ‘not being technical’. That isn’t good enough when your entire job depends upon being adept at managing a technology platform and what happens within it.
At the basic level, this requires knowing how the features of the platform work and being able to diagnose any potential problems which arise. Your staff should be able to learn this by testing things, experimenting in the community, and asking around in the vendor’s relevant communities.
Beyond this, you need your team to be able to resolve most issues independently, run SQL queries to get the data they need, and make improvements to the structure and design of the community without help. This kind of knowledge is best gained through peer support.
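To make the SQL point concrete, here is a minimal, hypothetical sketch. The schema and data are invented for illustration and won't match any real community platform, but the shape of the query, finding the most active members in a given month, is exactly the kind of ad hoc question a community manager should be able to answer without filing a vendor ticket:

```python
import sqlite3

# Hypothetical schema: real platforms differ, but most store posts
# with an author and a timestamp somewhere.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (author TEXT, created_at TEXT)")
conn.executemany(
    "INSERT INTO posts VALUES (?, ?)",
    [
        ("alice", "2018-03-01"),
        ("alice", "2018-03-05"),
        ("bob", "2018-03-02"),
        ("carol", "2018-02-20"),  # outside the reporting window
    ],
)

# Top contributors in March 2018, most active first.
rows = conn.execute(
    """
    SELECT author, COUNT(*) AS n
    FROM posts
    WHERE created_at >= '2018-03-01' AND created_at < '2018-04-01'
    GROUP BY author
    ORDER BY n DESC
    """
).fetchall()
print(rows)
```

The same pattern (filter, group, sort) covers a surprising share of day-to-day community reporting.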
As you reach the more advanced levels, you want your team to know more about the platform environment, use data to improve the speed and functionality of the platform, and borrow the best features from third-party platforms to build the best community possible.
Finally, you want to be able to take responsibility for the entire vendor process and address legal/privacy/security issues which can arise.
5) Business

Business skills are the link between the community and the organization: the ability to make the community indispensable to it. This begins with knowing how the community is supposed to help the business and what resources are available to develop it.
As you get better, you should become more adept at acquiring more resources by building a strong internal narrative and persuasively winning over any skeptics and key stakeholders within the organization.
Finally, this evolves into being able to attract and retain world-class community talent, build career maps, and build a community-first culture throughout the organization. The very best people I know have a pipeline of people eager to work for them. This doesn’t happen by accident; it happens by doing what one friend calls ‘building pipe’: constantly showing up, making connections, and knowing what talent you’re looking for.
Set A Target To Improve Every 3 To 6 Months
Now you have your benchmarks, you can begin to set reasonable targets of improvement for each member of your team.
A reasonable rate of progress is an increase of 1 level (in 1 attribute) every 2 to 3 months at the first two levels, and usually every 3 to 6 months at the upper levels.
This gives your team a clear focus and lets you build a roadmap of what you expect over 6 to 12 months. This, in turn, lets you identify what kind of support you need to provide the community team to help them reach each level.
Avenues for Progress
Courses aren’t the only method of improving your community team or your own abilities. There are multiple channels available here, and each can work in different situations.
- Professional courses. I’d recommend our Strategic Community Management and Psychology of Community courses for strategy/engagement development.
- Books. Focus on specific topic areas, not pop-business books. This tends to be good for content skills and some marketing/growth abilities.
- Conferences. This is good for content/SEO skills, some engagement skills, and some highly focused areas. It’s good for building relationships which can help in other areas too.
- Blogs. These are fantastic for most areas, especially psychology, marketing, SEO, and analytics. Find the right expertise here.
- Peer communities. Both industry sites (CMX, Community Roundtable, FeverBee Experts) and communities in each unique field (technology, journalism, copywriting, marketing, SEO, etc.). Encourage your team to identify problems, ask questions, and get help. Small peer groups of people working in similar communities are also a good idea.
- Mentoring and support. This covers both informal mentoring and professional options. This is best for business skills, strategy, and some technical expertise.
- Experimenting. Especially in technical (use a sandbox!) and some areas of engagement. You can run small trials to see what does or doesn’t work.
This is a process that never ends. The goal is to set benchmarks, track progress, and push for ongoing improvement from every member of the community team. Set a skills roadmap for every person you work with and commit to reviewing it together every 3 months.
"Digital first" is the philosophy of imbeciles who know the answer before they know the question.
They know the treatment before they know the condition. They know what tool to use before they know what's broken. Imagine a doctor whose philosophy is "appendectomy first." He knows the cure before he knows the disease.
There is no other industry that would accept such manifest stupidity. But it is not just alive in our industry, it is commonplace.
There are a few reasons why this idiocy exists. First, and most understandable, is that it's what some people were taught. They learned it in school and have sought to learn nothing new since. They have made a practice of interpreting the world through its myopic lens.
Believing in the rapidly decomposing digital fantasy (see this and this), they never bothered to acquire any other advertising or marketing knowledge. If the only tool you have is a hammer...well, you know the rest.
These people are ignorant, but it is usually an honest ignorance.
But there's another group of "digital firsters" who are not nearly as ignorant and not nearly as honest. They are the ones who put digital first because it is more lucrative. They have found that they can make more money buying digital advertising than traditional advertising. It doesn't really matter to them what's best for you, they know what's best for them.
Sadly, reading between the lines of the ANA's media transparency investigation, some agencies seem to fit nicely into this box. They have no ideological commitment to digital, they have an ideological commitment to money.
Mark Ritson wrote a couple of compelling pieces about this recently in The Australian. Unfortunately, The Australian is behind a paywall so I can't link you to his pieces. After quoting a few media experts who assert that...
- commissions on traditional media usually run in the 3% range
- commissions on digital media run in the 7-10% range
- because of automation, digital media are no longer any more difficult or time-consuming to buy than traditional media
- agencies often set digital media buying quotas for media buyers to meet

...Prof. Ritson concludes...
"...let's also accept something that no one in the industry wants to talk about: that digital media gets a much greater share of the pie than news media (print) because it is more profitable for the agencies that recommend it."
"...The simple marginal profit that agencies make from digital media is almost triple what they would get from channeling the money to news media."

As I said in my Type A Group Newsletter last week, Prof. Ritson is too wise and prudent to make outright accusations. I'm not. There is no doubt in my mind that to some agencies "digital first" is just code for "me first."
Does this mean that everyone who recommends a digital media buy is a doofus or a crook? Of course not. There are circumstances when an online buy is a perfectly reasonable recommendation. But anyone with a functioning brain will consider what the problem is before he recommends a solution. Anyone who starts with the solution -- e.g., "digital first" -- is a fool.
Here's a surefire litmus test for determining who you're dealing with. If they have the answer ready before you tell them the question, they're either imbeciles or opportunists.
September 2016 marked the start of Apple's latest battle in what has been a multi-decade war. This newest battle was going to look and sound different. Apple had unveiled new iPhones lacking dedicated headphone jacks. The controversial move was criticized by many as a sign of Apple going too far in flexing its power and killing off legacy technologies for the sake of change. However, Apple's move wasn't about headphone jacks or even iPhones. Apple had made a big bet regarding the future of "sound on the go," and headphone wires were the enemy. We now see Apple unveiling its revised strategy for controlling sound in the home. The best way to analyze products like AirPods and HomePod is to look at them as the latest weapons in Apple's battle for controlling sound in our lives.
Apple's motivation in controlling sound is based on delivering impactful and memorable user experiences. Accordingly, music has played a fundamental role in Apple's sound strategy for the past two decades. Listening to a particular song can mentally remove someone from his or her surroundings. Music is one of the few things capable of fostering such strong emotional connections and experiences. It's a safe bet to assume music will be around for a very long time while music consumption will remain a key task handled by our computing devices.
There are two parts to Apple's strategy for controlling sound:
- Sound on the Go
- Sound in the Home
Sound on the Go
Apple is no stranger to selling devices designed to deliver sound. However, the iPod marked the beginning of Apple's quest to control sound on the go. Positioned as a "breakthrough digital device," the iPod changed the way we consumed music on the go, offering a much better experience than the existing mobile listening options of the time. In what is now difficult to comprehend, before the iPod you simply couldn't carry your entire music library in your pocket.
The first iPod commercial highlighted the device's mobility as the user danced around his house while listening to music via iPod and white earbuds. The kicker was found at the end as he stepped outside the four confined walls of his home and into the outside world without missing a beat. The iPod was about consuming sound not just around the home, but more importantly, outside the home.
Over the subsequent years, Apple went on to unveil a number of different iPods, some of which turned out to be more popular than others. However, Apple was just getting started when it came to controlling sound on the go. Sensing that the iPod's long-term threat was found with people listening to music on smartphones, Apple began work on a much more ambitious product: iPhone.
As seen in Exhibit 1, the iPhone changed the course of Apple's sound on the go strategy. While the iPod was thought to be Apple's first mass-market product, the iPhone went on to redefine what it meant to be truly mass market. When Apple unveiled iPhone, the company was selling 50M iPods per year. Apple is currently selling 215M iPhones per year. Even though iPhone was much more than a dedicated music player, Apple never let go of its deep interest in music.
Exhibit 1: Apple "Sound on the Go" Devices Unit Sales (iPod and iPhone)
AirPods mark the latest step in Apple's sound on the go strategy. The device is born out of the belief that there isn't a place for wires in a wearables world. AirPods were initially criticized for their unusual looks, but those concerns have quickly disappeared. Whereas wireless AirPods may have looked odd to some, having wires hanging out of people's ears will eventually look out of place.
Based on sales, AirPods have been a resounding success. According to my estimate, Apple sold approximately 11M pairs of AirPods in 2017. This positions the device as the third best-selling Apple product out of the gate, behind iPad and Apple Watch.
Exhibit 2: AirPods Unit Sales
AirPods sales momentum is poised to continue over the next few years. As Apple Watch achieves greater independence from iPhone, AirPods will play a crucial role in delivering sound to tens of millions of Apple Watch users. Apple will reportedly unveil updated AirPods later this year. The runway for AirPods is long, with a list of potential features that could include everything from health tracking to noise cancellation and augmented hearing. The big question for future AirPods is which features will get the green light. Apple is also rumored to unveil noise-cancelling, over-ear headphones, a move suggesting Apple will expand its line of wireless headphones.
Sound in the Home
It's easy to look at HomePod as Apple's foray into controlling sound in the home. In reality, the company's sound in the home strategy started with a whimper in 2006 with the iPod Hi-Fi speaker. The speaker was tasked with reinventing the home stereo for the iPod age by being positioned as a companion product to iPod. Apple ended up pulling the plug on the device after just 19 months of sales.
Twelve years later, Apple is giving sound in the home another try with HomePod. There are key differences in strategy this time around. Whereas iPod Hi-Fi was meant to enhance the iPod and iTunes ecosystem, HomePod and its A8 chip are being given a more ambitious goal: reinventing sound in the home by bringing computational audio to the masses. HomePod scans the room it's located in and then tailors sound output to that room. However, Apple isn't using computational audio to underpin its initial HomePod marketing campaign. Instead, Apple is relying on emotion, a page taken directly from the iPod, iPhone, and even AirPods playbooks. The latest HomePod "ad," a 4-minute film directed by Spike Jonze, is striking in its theatrics. But the thing that instantly jumped out at me about the video is how similar it is to the original iPod ad.
In both, we see people enjoying music using Apple devices. While one is listening to an iPod to get him pumped up to leave the house and experience the outside world, the other is listening to HomePod after coming home following a tough day. In both examples, the people lose themselves in the music experience produced by an Apple device.
Apple Music and Beats
Instead of being a revenue or profit driver, Apple Music serves as the glue in Apple's quest to control sound. While Apple's Beats acquisition was driven by music streaming and buying into Jimmy Iovine's overall music vision, Apple probably didn't mind getting a popular headphones brand in Beats. It's not as if Apple was unfamiliar with the power found with headphones (not to mention the branding opportunity).
The fact that Apple kept the Beats brand for headphones speaks to how Beats headphones likely serve a different target market. In fact, Apple has positioned Beats headphones as a complement to AirPods. This has likely gone a long way in removing the oxygen from the wireless headphone category and preventing competitors from establishing any kind of beachhead.
Elephant in the Room
Apple's strategy for controlling sound in the home seems to have met its match in the form of Amazon Echo Dot and Google Home Mini. Whereas iPod, iPhone, AirPods, and Beats are personal devices delivering sound to individual users, cheap stationary smart speakers powered by digital voice assistants are shaping up to be more about communal experiences. In addition, the value found with an Echo or Google Home isn't derived from sound quality but rather from the intelligence of the digital voice assistant that lives in the cloud. This has led the tech community to think Apple misfired by positioning HomePod as a high-quality music speaker.
I see things differently.
The rise of digital voice assistants like Alexa and Google Assistant has seemingly redefined a stationary speaker's purpose, so it's now about delivering intelligence rather than sound. The implication here is that the stationary speaker part of the equation is temporary in nature. If the same intelligence can be delivered to the user some other way, say via smart glasses, a smartwatch, or a pair of wireless headphones, low-end stationary speakers lose value. This idea serves as the basis for why I think the current stationary speaker narrative is off the mark. Apple looks at a stationary speaker as a tool capable of delivering intelligent sound. This use case likely won't change any time soon.
With HomePod, Apple isn't selling a high-quality music speaker. Instead, Apple is selling a new kind of music experience - one that can't be produced with mobile devices, low-end speakers used for digital voice assistants, or even high-end speaker systems that may exceed $1,000. This music experience consists of a music streaming service, a digital voice assistant, and a combination of hardware and software that allows HomePod to map its surroundings and adjust sound output accordingly.
iPod did not become popular because it offered vastly superior sound quality on the go. Instead, it became a hit because it offered a better all-around music listening experience versus the competition. In addition, fashion began to matter with iPod and the accompanying white earbuds. We see a similar dynamic take place with AirPods. Millions of people aren't buying AirPods because of their superior sound quality. Instead, AirPods just work and offer a great user experience. Similar to iPod, AirPods are also seeing building momentum in fashion. AirPods are becoming the new cool, fashionable item that people want to be seen wearing on the street.
Apple is running away with its sound on the go strategy. The company has no legitimate competition in the wireless headphone market. While this may change, it's not clear where that competition will come from. Meanwhile, Apple's position in controlling sound in the home appears to be much more precarious. Many think such a dramatic juxtaposition is due to Alexa and Google Assistant having already established a beachhead in the home. I'm not so sure about that. My suspicion is that HomePod is facing three different issues that make for a challenging environment:
- Communal experiences. Voice-controlled smart speakers positioned in a common area aren't personal gadgets like iPhones and AirPods. It's not realistic to assume a family of four will have four different smart speakers catering to each member of the family. Apple's approach to this situation with HomePod appears to be to initially assume the device is for one user and then give that user the option to turn the device into more of a family music speaker that anyone can use to consume music. However, there are questions as to whether that can truly provide a superior music listening experience.
- Not just about music. While music has underpinned Apple's sound on the go strategy, iPhone and AirPods are used for more than music consumption. It's a stretch to say the same applies to the first iteration of HomePod, for which music consumption is the primary use case. This may change down the road as Apple brings additional features to HomePod, but it's not clear if anything would replace music consumption as the speaker's primary use case.
- Competing against nonconsumption. With HomePod, Apple's most intense competitor ends up being nonconsumption, or the lack of high-quality speakers in the home. Up to now, most people haven't seen the need for or appeal of high-quality sound in the home. The high-end speaker market is niche. Apple is trying to change that and thinks a broader focus on the music listening experience is the answer.
While there are differences between Apple's sound on the go and sound in the home strategies for controlling sound, both share a common trait. Ultimately, each is about delivering experiences. In terms of sound on the go, Apple will likely look to deeply integrate Apple Music into its growing wearables lineup. In addition to delivering music, these wearables products will also serve as a conduit for delivering a digital voice assistant to the user. For sound in the home, Apple believes a use case for a stationary speaker that will likely still be around 5, 10, and even 15 years from now is music consumption. There is a long runway found with HomePod and the ability to reimagine sound to deliver a better music experience.
Most people give little thought to how they come across to the other side in conference calls. And that depends to a large degree on the microphone. So I ran a small test. The occasion was an exchange with our Videohannes, who explained to me how he sets up live link-ups with people who aren't in the studio. (Not important here, but he uses Skype.)

This is a recording made with Voice Memos on the iPhone, using the following headsets in order: Apple EarPods, Apple AirPods, Plantronics Voyager 3200/Edge, Plantronics Voyager 5200, and the Apple EarPods again.

Many will surely be surprised by how bad the AirPods sound. They are genuinely pleasant to listen with, but for recording they are worse than all the other alternatives. And I think you can also hear what you gain with the Voyager 5200, even if it looks a bit bulkier.

Not reflected here at all: the Plantronics headsets suppress ambient noise reliably. With the 5200 you can't even hear the wind blowing into the microphone while cycling. The EarPods would be completely lost there.

vowe's choice: If it has to be wireless and good, especially in loud environments: the Voyager 5200 UC. Without UC it is thirty euros cheaper, but comes with neither the charging case nor the USB adapter for PC or Mac, and you'll want both. An alternative with equally good sound: the Voyager Focus.
Neural networks can feel like a black box, because, well, for most people they are. Supply input and a computer spits out results. The trouble with not understanding what goes on under the hood is that it’s hard to improve on what we know. It’s also a problem when someone uses the tech for malicious purposes, as people are prone to do.
So folks from Google Brain have broken down the structures that make these things work.
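To make the "black box" framing concrete: even a tiny feed-forward network is just a couple of matrix multiplications, and every intermediate activation can be inspected along the way. A minimal sketch with made-up random weights (nothing here comes from the Google Brain work itself):

```python
import numpy as np

# A toy 2-layer network with fixed random weights. Nothing is hidden:
# each layer is a matrix multiply plus a nonlinearity, and we can print
# every intermediate value on the way through.
rng = np.random.default_rng(42)
W1 = rng.standard_normal((4, 3))   # input dim 4 -> hidden dim 3
W2 = rng.standard_normal((3, 2))   # hidden dim 3 -> output dim 2

def relu(x):
    return np.maximum(0.0, x)

x = np.array([1.0, -0.5, 2.0, 0.0])
hidden = relu(x @ W1)              # inspectable intermediate activations
output = hidden @ W2

print("hidden activations:", hidden)
print("output:", output)
```

The hard part isn't seeing the numbers; it's interpreting what millions of them mean at once, which is what this line of research is after.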
A few years ago, Ray Corrigan suggested to me that the “surveillance state” would most likely be manifest through the everyday mundane actions of petty bureaucrats and administrators, as well as the occasional bitter petty-Hitler having a bad day.
Ray has a systems background, and systems theories were one of my early passions (I started reading General Systems Theory books whilst still at school), so I suspect we both tend to see things from a variety of systems perspectives.
This means that we see the everyday drip, drip, drip of new technologies, whether electronic, digital or legal (because I take legal and contractual systems to be technologies too) not as independent initiatives but as potential components of a wider system that allows these apparently innocuous separate components to be combined or composed together to create something new.
(Inventing a new thing in and of itself is not that interesting. There are many more opportunities for combining things that already exist in new ways than there are for inventing new things “out of nowhere”.)
For example, today a news story has been doing the rounds repeating claims that the DWP are requesting video surveillance materials from leisure centres as part of fraud investigations [DWP: No denial of blanket harvesting of CCTV or Fake-Friending of Disabled People]. It is clear that the DWP do make use of video surveillance footage (just do a news search on disability DWP video), but it is not clear to what extent they do so, or are legally allowed to request footage from third parties.
The story reminded me about some digging I did a few years ago around DWP using the opportunity of police roadside traffic checks for fraud trawling [All I Did Was Go to a Carol Service…]. It’s maybe worth revisiting that, and updating it with examples relating to requests for video surveillance footage.
What’s notable, perhaps, is the way in which laws are written to afford enforcers a legal basis for some activity that, at the time the law is written, is apparently innocuous, and offensive only to creeped-out paranoid science-fiction activists imagining what the application of the law might mean. At the same time, such laws light up the eyes of techies who see them as enablers of a particular, technologically mediated action that could be “really cool”.
“Cool, we could use smartphone fingerprint readers for handheld fingerprint checks anywhere and it’d be really cool…” Etc.
Which is not totally science fiction. Because if you recall, UK police started using handheld fingerprint readers for identification purposes as part of “everyday” stop and search activities earlier this year [Police in West Yorkshire can now use mobile phones to check your fingerprints on the spot]. You can read a Home Office press release about the trial here: Police trial new Home Office mobile fingerprint technology.
Amidst the five minute flurry of reactionary “this is not acceptable behaviour” tweets, Andrew Keogh pointed out that [t]his isn’t new law, it has been on the Statute Book for some time, arising from an update to the Police and Criminal Evidence Act, 1984 (PACE) by the Serious Organised Crime and Police Act 2005. My naive reading of that amendment is that the phrases “A constable may take a person’s fingerprints without the appropriate consent” and “Fingerprints taken … may be checked against other fingerprints to which the person seeking to check has access” put in place the legal basis for procedures that are perhaps only now becoming possible through the widespread availability of handheld, networked devices.
One wonders how much other law is already in-place that a creative administrator or bureaucrat might pick up on and “reimagine” using some newly available technology.
At the same time that legislation might incorporate “sleeper” clauses that can be powerfully invoked or enabled through the availability of new technologies, the law also fails to keep up with the behaviour of the large web companies (that were once small web companies).
I was never convinced by Google’s creative interpretations of copyright law on the one hand, or its “we’re a platform not a media company” mantra (often combined with US constitutional “free speech” get-out clauses). It will be interesting to see which one they deploy against the AgeID requirements for viewing online pornography in the UK, which is to be regulated by the BBFC (background: DCMS Age Verification for pornographic material online impact assessment, 2016, as well as the current UK gov Internet Safety Strategy activity). Because Google and Bing are, of course, pr0n search engines with content preview availability.
On the topic of identity creep, another government initiative announced some time ago (Voter ID pilot to launch in local elections [Sept 2017]) had a more formal announcement earlier this week: Voter ID pilot schemes – May, 2018. For a reaction that summarises my own feelings, see e.g. James Ball, Electoral fraud is incredibly rare. The Tories’ ID trial is an unsavoury attempt at voter suppression. When you take the time out to go and vote, just remember to take your ID with you. And if you need to get some ID, make sure you do it well in advance of the election day on which you’ll need it.
As with many technologically led, “digital first” initiatives (an anti-pattern loved by certain public bodies as a way of making services inaccessible to those most likely to need or benefit from them), I can’t help feeling that technology is increasingly seen by those who know how to use it as an instrument for abuse, rather than as an enabler. Which really sucks.
Nice post from Bud Hunt on his experiences during a student protest against guns. "It’s easy for me to forget, in the humdrum of spreadsheets and TPS reports and invoices and administrivia, that there are real fights to fight, and real struggles of the head and heart and hands that require my, and many others’ attention," he writes. "I hope you’re finding ways, even really little ways, to practice being brave and strong and true. Not only will they make the world a better place – but they’ll make you stronger for the next struggle that’ll need the best you can bring."
This week, as part of the Serbian open data week, I participated in a panel discussion, talking about international developments and experiences. A first round of comments was about general open data developments, the second round was focused on how all of that plays out on the level of local governments. This is one part of a multi-posting overview of my speaking notes.
Local open data may need national data coordination
Using local open data effectively may well require that specific types of local data be available for an entire country, or at least a region. While e.g. real-time parking data is useful even if it exists for just one city, for other data the interest lies in being able to make comparisons. Local spending data is much more interesting if you can compare it with similar-sized cities, or across all local communities. Similarly, public transport data gains in usefulness if it also shows the connection with regional or national public transport. The same is true for topics like performance metrics, maintenance, and the quality of public services.
This is why in the Netherlands you see various regional initiatives where local governments join forces to provide data across a wider geographic area. In Fryslan the province, capital city of the province and the regional archive collaborate on providing one data platform, and are inviting other local governments to join. Similarly in Utrecht, North-Holland and Flevoland regional and local authorities have been collaborating in their open data efforts. For certain types of data, e.g. the real estate valuations that are used to decide local taxes, the data is combined into a national platform.
Seen from a developer’s perspective this is often true as well: if I want to build a city app that incorporates many different topics and thus many data sets, local data is fine on its own. If I want to build something topic-specific, e.g. finding the nearest playground, or checking the quality of local schools, then being able to scale it to national level may well be needed to make the application a viable proposition, regardless of the fact that the users of such an application are each only interested in one locality.
A different way of this national-local interaction is also visible. Several local governments are providing local subsets of national data sets on their own platforms, so it can be found and adopted more easily by locally interested stakeholders. An example would be for a local government to take the subset of the Dutch national address and buildings database, pertaining to their own jurisdiction only. This large data source is already open and contains addresses, and also the exact shapes of all buildings. This is likely to be very useful on a local level, and by providing a ready-to-use local subset local government saves potential local users the effort of finding their way in the enormous national data source. In that way they make local re-use more likely.
Citizen generated data and sensors in public space
As local governments are responsible for our immediate living environment, they are also the ones most confronted with the rise in citizen generated data, and the increase in the use of sensors in our surroundings.
Where citizens generate data this can be both a clash as well as an addition to professional work with data.
A clash in the sense that citizen measurements may provide a counter argument to government positions. That the handful of sensors a local government might employ show that noise levels are within regulations, does not necessarily mean that people don’t subjectively or objectively experience it quite differently and bring the data to support their arguments.
An addition in the sense that sometimes authorities cannot measure something within accepted professional standards. The Dutch institute for environment and the Dutch meteo-office don’t measure temperatures in cities because there is no way to calibrate such measurements (too many factors, like the heat radiance of buildings, are in play). But when citizens measure those temperatures, and there is a large enough number of sensors, the trends and patterns in those measurements are nonetheless of interest to those government institutions. The exact individual measurements are still of uncertain quality, but the relative shifts add a new layer of insight. With the decreasing prices of the sensors and hardware needed to collect data, there will be more topics for which citizen-generated data comes into existence. The Measure Your City project in my home town, for which I have an Arduino-based sensor kit in my garden, is an example.
There’s a lot of potential for valuable usage of sensor data in our immediate living environment, whether generated by citizens, corporations or local government. It does mean, though, that local governments need to become much more aware than they currently are of the (unintended) consequences these projects may have. Local government needs to be extremely clear on its own different roles in this context: rule-setter, safeguard of our public spaces, instigator or user, and any or all of those at the same time. It needs an acute awareness of how to translate that into the way local government enters into contracts, sets limits, collaborates, and provides transparency about what exactly is happening in our shared public spaces. A recent article in the Guardian on the ‘living laboratories’ using sensor data in Dutch cities such as Utrecht, Eindhoven, Enschede and Assen shines a clear light on the type of ethical, legal and technical awareness needed. My company has recently created a design and thinking tool (in Dutch) for local governments to balance these various roles and responsibilities. This ties back to my previous point that local governments are not data professionals, a lack of expertise that needs to be addressed.
2017 was a great year for Mozilla. From new and revitalized product releases across our expanding portfolio to significant progress in advocating for and advancing the open web with new capabilities and approaches, to ramping up support for our allies in the broader community, to establishing new strategic partnerships with global search providers — we now have a much stronger foundation from which we can grow our impact in the world.
Building on this momentum, we are making two important changes to our leadership team to ensure we’re positioned for even greater impact in the years to come. I’m pleased to announce that Denelle Dixon has been promoted to Chief Operating Officer and Mark Mayo has been promoted to Chief Product Officer.
As Chief Operating Officer, Denelle will be responsible for our overall operating business leading the strategic and operational teams that work across Mozilla to ensure we’re scaling our impact as a robust open source organization. Aligning these groups under Denelle’s leadership will ensure a holistic approach to business growth, development and operating efficiency by integrating the best of commercial and open innovation practices across all that we do.
As Chief Product Officer, Mark will oversee existing and new product development as we deepen and expand our product portfolio. In his new role, Mark will oversee Firefox, Pocket, and our Emerging Markets teams. Having all our product groups in one organization means we can more effectively execute against a single, clear vision and roadmap to ultimately give people more agency in every part of their connected lives.
Our mission is more important and urgent than ever, our goals are ambitious and I’m confident that together we will achieve them.
This was composed by Chris Smither (no, I’d never heard of him either) but was a hit for, and is now sort of a trademark of, Bonnie Raitt. Bonnie’s recorded a lot of good music over the years, but the thing with her is you need to see her play live, it’s at another level entirely.
Bonnie is a tuneful, convincing, loud blues singer (listen to Love Me Like a Man, for example), and is also a dazzling guitarist. In particular she has this gritty, textured slide-guitar tone that absolutely nobody else gets. This was brought home to me one time when I was at a concert and the opening act was led by her band’s keyboard player, and they weren’t bad at all, funk and soul and energy. But Bonnie came on to play their last tune, just played along on guitar in the band, and all of a sudden it was like they turned the volume up to eleven, there was this huge growling beast lurking in the center of the band’s sound.
But on Love Me Like A Man, she normally drops the slide and fingerpicks her breaks, which don’t have the raw intensity but are really just awfully graceful.
I have another connection with Ms Raitt. In 1989 I wrote a “benchmark”, a computer program which exists to test how fast computers can do a particular task — in this case, reading and writing data to and from files on disk. Because this kind of program exists to find bottlenecks in computer systems, and because Bonnie is so great with a bottleneck guitar, I called the program Bonnie, and people are still running it.
Now as for the words… are we into dangerous territory here? There is a breed of male songwriters who might have gotten themselves deep into the sexist weeds starting with a title like this. (And in fact, the original title was “Love Her Like a Man”). No, I don’t believe so at all; but give it a listen and see what you think.
Spotify playlist. This tune on Spotify, iTunes, Amazon. Now for a live-video treat: First, very young at Austin City Limits, then a much more grown-up version; you wouldn’t want to miss either. Listen to those guitar breaks!
Purism Partners with Cryptography Pioneer Werner Koch to Create a New Encrypted Communication Standard for Security-Focused Devices
Koch’s GnuPG and Smartcard encryption innovations popularized by Edward Snowden to be implemented in Purism’s Librem 5 smartphone and Librem laptop devices.
SAN FRANCISCO, California — March 8th, 2018 — Purism, maker of security-focused laptops, has announced today that it has joined forces with leading cryptography pioneer Werner Koch to integrate hardware encryption into the company’s Librem laptops and forthcoming Librem 5 phone. By manufacturing hardware with its own software and services, Purism will include cryptography by default, pushing the industry forward with unprecedented protection for end-user devices.
Adding to the implementation and delivery of Trammell Hudson’s Heads security firmware and its partnership with Nextcloud for end-to-end encrypted storage, Purism will now leverage the GnuPG (GPG) and SmartCard designs that Werner Koch has been involved in for over a decade, to include encryption by default in its hardware, software, and services. Librem devices will also include Werner’s GPG encryption, which Edward Snowden famously used to communicate with journalists, by default in communications such as email and messaging, through a new process called Web Key Directory, which is a distributed system that allows users to select recipient permissions on communications that will be encrypted.
“Purism’s goal of easy-to-use cryptography built into its products is the ideal approach to gain mass adoption—Purism is manufacturing modern hardware designed to allow the users to have control of their own systems.” said Werner Koch, GPG creator and leading cryptography expert. “Leveraging cryptography in hardware, software, and services together ensures the best approach to security, and I’m excited to help advance that with Purism.”
“We are very excited about the partnership between Purism and Werner Koch, especially being able to work closely with a foremost expert with regard to all things cryptography,” said Todd Weaver, CEO of Purism. “We continue to advance solutions aligned with our goals of privacy and security for everyone and this union adds to our growing roster of partnerships that represent our ideal approach to protecting users by default, without sacrificing convenience or usability.”
With the help of Koch, Purism will also utilize secure storage with full-disk-encryption, and file encryption. The final implementation will allow the user or business to retain control by holding the keys to protect their own digital files or data.
About the Purism Librem
Purism’s goal with the Librem products is to offer high-end computers and phones with a simple, secure, out-of-the-box computing environment usable by ordinary users.
Purism’s Librem laptops feature customized components designed to increase security and freedom for end-users, including coreboot, a neutralized Intel Management Engine, Intel AMT avoidance, and many other features that make this partnership possible on a technical level.
About Werner Koch
Werner Koch became interested in software development in the late seventies and since then worked on systems ranging from CP/M systems to mainframes, languages from assembler to Smalltalk and applications from drivers to financial analysis systems. He is widely known as the principal author of the GNU Privacy Guard (GnuPG or GPG), a project he started in 1997 to foster the use of encrypted communication. Living in Europe, he was not affected by the then active U.S. export restrictions on cryptography software and thus able to make GnuPG available for everyone as Free Software. A large online community of users, developers and donors are extensively helping to maintain and extend GnuPG.
About Purism
Purism is a Social Purpose Corporation devoted to bringing security, privacy, software freedom, and digital independence to everyone’s personal computing experience. With operations based in San Francisco (California) and around the world, Purism manufactures premium-quality laptops, tablets and phones, creating beautiful and powerful devices meant to protect users’ digital lives without requiring a compromise on ease of use. Purism designs and assembles its hardware in the United States, carefully selecting internationally sourced components to be privacy-respecting and fully Free-Software-compliant. Security and privacy-centric features come built-in with every product Purism makes, making security and privacy the simpler, logical choice for individuals and businesses.
Media Contact
Marie Williams, Coderella / Purism
See also the Purism press room for additional tools and announcements.
Less than four years ago, Magda Jadach was convinced that programming wasn’t for girls. On International Women’s Day, she tells us how she discovered that it definitely is, and how she embarked on the new career that has brought her to Raspberry Pi as a software developer.
“Coding is for boys”, “in order to be a developer you have to be some kind of super-human”, and “it’s too late to learn how to code” – none of these three things is true, and I am going to prove that to you in this post. By doing this I hope to help some people to get involved in the tech industry and digital making. Programming is for anyone who loves to create and loves to improve themselves.
In the summer of 2014, I started the journey towards learning how to code. I attended my first coding workshop at the recommendation of my boyfriend, who had constantly told me about the skill and how great it was to learn. I was convinced that, at 28 years old, I was already too old to learn. I didn’t have a technical background, I was under the impression that “coding is for boys”, and I lacked the superpowers I was sure I needed. I decided to go to the workshop only to prove him wrong.
Later on, I realised that coding is a skill like any other. You can compare it to learning any language: there’s grammar, vocabulary, and other rules to acquire.
Alien message in console
To my surprise, the workshop was completely inspiring. Within six hours I was able to create my first web page. It was a really simple page with a few cats, some colours, and ‘Hello world’ text. This was a few years ago, but I still remember when I first clicked “view source” to inspect the page. It looked like some strange alien message, as if I’d somehow broken the computer.
At times, I felt very isolated. Was I the only girl learning to code? I wasn’t aware of many female role models until I started going to more workshops. I met a lot of great female developers, and thanks to their support and help, I kept coding.
Another struggle I faced was the language barrier. I am not a native speaker of English, and diving into English technical documentation wasn’t easy. The learning curve is daunting in the beginning, but it’s completely normal to feel uncomfortable and to think that you’re really bad at coding. Don’t let this bring you down. Everyone thinks this from time to time.
Play with Raspberry Pi and quit your job
I kept on improving my skills, and my interest in developing grew. However, I had no idea that I could do this for a living; I simply enjoyed coding. Since I had a day job as a journalist, I was learning in the evenings and during the weekends.
I spent long hours playing with a Raspberry Pi and setting up so many different projects to help me understand how the internet and computers work, and to get to grips with the basics of electronics. I built my first ever robot buggy, retro game console, and light switch. For the first time in my life, I had a soldering iron in my hand. Day after day I became more obsessed with digital making.
Soldering iron, where have you been all my life? Weekend with #raspberrypi + @pimoroni + @Pololu + #solder = best time! #electricity
One day I realised that I couldn’t wait to finish my job and go home to finish some project that I was working on at the time. It was then that I decided to hand over my resignation letter and dive deep into coding.
For the next few months I completely devoted my time to learning new skills and preparing myself for my new career path.
I went for an interview and got my first ever coding internship. Two years, hundreds of lines of code, and thousands of hours spent in front of my computer later, I have landed my dream job at the Raspberry Pi Foundation as a software developer, which proves that dreams come true.
Where to start?
I recommend starting with HTML & CSS – the same path that I chose. It is a relatively straightforward introduction to web development. You can follow my advice or choose a different approach. There is no “right” or “best” way to learn.
Below is a collection of free coding resources, both from Raspberry Pi and from elsewhere, that I think are useful for beginners to know about. There are other tools that you are going to want in your developer toolbox aside from HTML.
- HTML and CSS are languages for describing, structuring, and styling web pages
- Raspberry Pi (obviously!) and our online learning projects
- Scratch is a graphical programming language that lets you drag and combine code blocks to make a range of programs. It’s a good starting point
- Git is version control software that helps you to work on your own projects and collaborate with other developers
- Once you’ve got started, you will need a code editor. Sublime Text or Atom are great options for starting out
Coding gives you so much new inspiration, you learn new stuff constantly, and you meet so many amazing people who are willing to help you develop your skills. You can volunteer to help at a Code Club or CoderDojo to increase your exposure to code, or attend a Raspberry Jam to meet other like-minded makers and start your own journey towards becoming a developer.
Robin Chase of osmosys.org joins the discussion (debate?) on the impacts of AVs – autonomous vehicles. Her perspective is binary:
Self-driving cars are coming! Will their future deliver us a transportation heaven, or hellacious cities? How they impact labor, energy, land use, and tax revenue is in our hands.
Mozilla is kicking off a new experiment for International Women’s Day, looking at ways to make open source software projects friendlier to women and racial minorities. Its first target? The code review process.
The experiment has two parts: there’s an effort to build an extension for Firefox that gives programmers a way to anonymize pull requests, so reviewers will see the code itself, but not necessarily the identity of the person who wrote it. The second part is gathering data about how sites like Bugzilla and GitHub work, to see how “blind reviews” might fit into established workflows.
The idea behind the experiment is a simple one: If the identity of a coder is shielded, there’s less opportunity for unconscious gender or racial bias to creep into decision-making processes. It’s similar to an experiment that began in the 1970s, when U.S. symphonies began using blind auditions to hire musicians. Instead of hand-picking known proteges, juries listened to candidates playing behind a screen. That change gave women an edge: They were 50 percent more likely to make it past the first audition if their gender wasn’t known. Over the decades, women gained ground, going from 10 percent representation in orchestras to 35 percent in the 1990s.
Mozilla is hoping to use a similar mechanism – anonymity – to make the code review process more egalitarian, especially in open source projects that rely on volunteers. Female programmers are underrepresented in the tech industry overall, and much less likely to participate in open source projects. Women account for 22 percent of computer programmers working in the U.S., but only 11 percent of them contribute to open source projects. A 2016 study of more than 50 GitHub repositories revealed that, in fact, women’s pull requests were approved more often than their male counterparts’ – nearly 3 percent more often. However, if their gender was known, female coders were 0.8 percent less likely to have their code accepted.
What’s going on? There are two possible answers. One is that people have an unconscious bias against women who write code. If that’s the case, there’s a test you can take to find out: Do I have trouble associating women with scientific and technical roles?
Then there is a darker interpretation: that men are acting deliberately to keep computer programming a boy’s club, rather than accepting high-quality input from women, racial minorities, transgender individuals, and economically underprivileged folks.
A Commitment to Diversity
What does it mean to be inclusive and welcoming to female software engineers? It means, first of all, taking stock of what kind of people we think will do the best job creating software.
“When we talk about diversity and inclusion, it helps to understand the ‘default persona’ that we’re dealing with,” said Emma Humphries, an engineering program manager and bugmaster at Mozilla. “We think of a typical software programmer as a white male with a college education and full-time job that affords him the opportunity to do open source work, either as a hobby or as part of a job that directly supports open source projects.”
This default group comes with a lot of assumptions, Humphries said. They have access to high-bandwidth internet and computers that can run a compiler and development tools, as opposed to a smartphone or a Chromebook. “When we talk about including people outside of this idealized group, we get pushback based on those assumptions,” she said.
For decades, white men have dominated the ranks of software developers in the U.S. But that’s starting to change. The question is, how can we deal with biases that have been years in the making?
Inventing a Solution
Don Marti, a strategist for Mozilla’s Open Innovation group, decided to take on the challenge. Marti’s hypothesis was: if I don’t know who requested the code review, then I won’t have any preconceived notions about how good or bad the code might be. Marti recruited Tomislav Jovanovic, a ten-year veteran of Mozilla’s open source projects, to create a blinding mechanism for code repositories like GitHub. That way, reviewers can’t see the gender, location, user name, icon, or avatar associated with a particular code submission.
Jovanovic was eager to contribute. “I have been following tech industry diversity efforts for a long time, so the idea of using a browser extension to help with that seemed intriguing,” he said. “Even if we are explicitly trying to be fair, most of us still have some unconscious bias that may influence our reviews based on the author’s perceived gender, race, and/or authority.”
Bias goes the other way as well, in that reviewers might be less critical of work by their peers and colleagues. “Our mind often tricks us into skimming code submitted by known and trusted contributors,” Jovanovic said. “So hiding their identities can lead to more thorough reviews, and ultimately better code overall.”
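The core of the mechanism is straightforward. As a purely illustrative sketch (this is hypothetical code, not Mozilla's actual extension, which works by rewriting DOM nodes on the review page), a blinding function would strip identifying fields from a pull request before the reviewer sees it, while keeping a way to uncover the identity afterwards:

```javascript
// Illustrative sketch of blinding a pull request object: identifying
// fields are replaced with neutral placeholders, and a reveal() function
// mirrors the ability to uncover the author after the review is done.
function blindPullRequest(pr) {
  const original = { ...pr }; // keep an untouched copy for reveal()
  const redacted = {
    ...pr,
    author: 'contributor', // neutral placeholder for the user name
    avatarUrl: null,       // hide the avatar/icon
    location: null,        // hide profile location
  };
  return { redacted, reveal: () => original };
}
```

For example, `blindPullRequest({ author: 'alice', avatarUrl: 'a.png', location: 'Berlin', title: 'Fix parser bug' })` yields a redacted request whose title and code are intact but whose author reads simply as "contributor".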
Test and Measure
An early prototype of a Firefox Quantum add-on can redact the identity of a review requestor on Bugzilla and of the pull request author on GitHub. It also provides the ability to uncover that identity, in case you prefer to get a first look at the code without author info and then greet a new contributor, or refer to a familiar contributor by name in your comments. Early users can also flag the final review as performed in “blind mode”, helping gather information about who is getting their code accepted and measuring how long the process takes.
Jovanovic is also gathering user input about what types of reviews could be blind by default and how to use a browser extension to streamline common workflows in GitHub. It’s still early days, but so far, feedback on the tests has been overwhelmingly positive.
Having a tool that can protect coders, no matter who they are, is a great first step to building a meritocracy in a rough-and-tumble programmer culture. In recent years, there have been a number of high-profile cases of harassment at companies like Google, GitHub, Facebook, and others. An even better step would be if companies, projects, and code repositories would adopt blind reviews as a mandatory part of their code review processes.
For folks who are committed to open source software development, the GitHub study was something of a downer. “I thought open source was this great democratizing force in the world,” said Larissa Shapiro, Head of Global Diversity and Inclusion at Mozilla. “But it does seem that there is a pervasive pattern of gender bias in tech, and it’s even worse in the open source culture.”
Small Bias, Big Impact
Bias in any context adds up to a whole lot more than hurt feelings. There are far-reaching consequences to having gender and racial bias in peer reviews of code. For the programmers, completing software projects – including review and approval of their code – is the way to be productive and therefore valued. If a woman is not able to merge her code into a project for whatever reason, it imperils her job.
“In the software world, code review is a primary tool that we use to communicate, to assign value to our work, and to establish the pecking order at work in our industry,” Shapiro said.
Ironically, demand for programming talent is high and expected to go higher. Businesses need programmers to help them build new applications, create and deliver quality content, and offer novel ways to communicate and share experiences online. According to the group Women Who Code, companies could see a massive shortfall of technical talent just two years from now, with as many as a million jobs going unfilled. At 59% of the U.S. workforce, women could help with that shortfall. However, they make up just 30% of workers in the tech industry today, and are leaving it faster than any other sector. So we’re not really heading in the right direction, in terms of encouraging women and other underrepresented groups to take on technical roles.
Maybe a clever bit of browser code can start to turn the tide. At the very least, we should all be invested in making open source more open to all, and accept high-quality contributions, no matter who or where they come from. The upside is there: Eliminate bias. Build better communities. Cultivate talent. Get better code, and complete projects faster. What’s not to like about that?
You can sign up for an email alert when the final version of the Blind Reviews Experiment browser extension becomes available later this year, and we’ll ask for your feedback on how to make the extension as efficient and effective as possible.
The post Mozilla experiment aims to reduce bias in code reviews appeared first on The Mozilla Blog.
I recently had the opportunity to read Tiffany Farriss' Drupal Association Retrospective. In addition to being the CEO of Palantir.net, Tiffany also served on the Drupal Association Board of Directors for nine years. In her retrospective post, Tiffany shares what the Drupal Association looked like when she joined the board in 2009, and how the Drupal Association continues to grow today.
What I really appreciate about Tiffany's retrospective is that it captures the evolution of the Drupal Association. It's easy to forget how far we've come. What started as a scrappy advisory board, with little to no funding, has matured into a nonprofit that can support and promote the mission of the Drupal project. While there is always work to be done, Tiffany's retrospective is a great testament of our community's progress.
I feel very lucky that the Drupal Association was able to benefit from Tiffany's leadership for nine years; she truly helped shape every aspect of the Drupal Association. I'm proud to have worked with Tiffany; she has been one of the most influential, talented members of our Board, and has been very generous by contributing both time and resources to the project.
David Wiley argues that we should revise the definition of 'open educational resources', changing the criteria from requiring 'free access to the resource' to requiring only 'free access to the rights to reuse the resource'. This allows the resource itself to not be available for free, and yet still be an open educational resource. On this model, he suggests, "the public will always have free access to the resource, but not that the public will have free access to every copy of the resource." It's a clever argument, but it has the unpalatable consequence that a resource might not be available to anyone and yet still, by this definition, be classified as an OER.
After interviewing five professional artists, researching 21 drawing tablets, and testing nine, we’ve found the Wacom Intuos Draw is the best drawing tablet for beginners. The Draw works on Windows and macOS with most popular art programs, and it offers the most precision and control of any tablet under $100. The Intuos Draw’s pen and tablet buttons are among the most customizable—along with the other Wacom tablets we tested—thanks to excellent software.
Update 03/08/2018: Snap has confirmed it will cut 120 positions from its engineering team. You can find the full statement below:
— Alex Heath (@alexeheath) March 8, 2018
Snapchat’s parent company Snap is reportedly planning on laying off around 100 workers from its engineering department.
People familiar with Snap’s plans told video news service Cheddar in an exclusive interview that the layoffs would affect approximately 10 percent of Snap’s engineering team and would take place at some point within the next week.
While this doesn’t seem like a massive number of people, this is the first time Snap will lay off employees in its engineering department.
Last year, Snap laid off a few dozen people in its hardware divisions. The company also laid off around 22 people in a variety of other teams in January 2018.
According to Cheddar, Snap has slowed its hiring rate by around 60 percent, maintaining a workforce of roughly 3,000 total employees.
The post Snap confirms it is cutting 120 engineering jobs [Update] appeared first on MobileSyrup.
Just minutes after getting my hands on Samsung’s latest flagship S Series smartphone, one fact was abundantly clear — if you weren’t fond of the Galaxy S8 and S8+, then you’re going to have similar feelings about the S9 and S9+.
Over the last few days, I’ve described the South Korean manufacturer’s latest flagship as its iPhone 8.
S8+ on the left and the S9+ on the right.
Here’s why: there’s nothing really wrong with the S9 or S9+. In fact, I’d go so far as to say that Samsung’s latest S series smartphones are the pinnacle of the Android ecosystem, surpassing even the beleaguered Pixel 2 XL in some respects.
It’s just that the S9 isn’t a significant improvement over the S8.
The smartphone features an impressive, versatile camera — complete with a variable aperture — that still shoots slightly over-processed photos, a design that’s a cut above competitors and, as expected, unrivalled build-quality in the Android space.
That’s not to say that there aren’t improvements in Samsung’s latest flagship. Qualcomm’s new Snapdragon 845 (the phone features Samsung’s own Exynos 9810 processor in Europe and other regions) provides an impressive 35 percent improvement in speed. The S9+ also now features 6GB of RAM, an update I found helped the phone run more smoothly when the device was under heavy load.
The problem is that, from an aesthetic perspective, the S9 looks identical to the S8. Given that Samsung is synonymous with Android in North America, the average consumer often purchases a new device based solely on how it looks.
If the S8 is just as flashy as the S9 at first glance, why would someone opt to purchase the newer, more expensive device, when the S8 is sure to be sold at a significant discount?
And therein lies the problem.
So what’s actually new?
Though most of the S9’s new features are software-related, the smartphone includes a couple of hardware improvements as well.
The most significant change to the S9’s exterior is the new location of the fingerprint sensor. It’s now located below the camera, instead of to the right of the lens like with the S8/S8+ and Note 8. This means it’s no longer necessary to awkwardly claw your hand around the device in order to reach the fingerprint scanner. That said, the S9’s fingerprint scanner is still tiny and I did encounter instances where I found it difficult to locate.
I’m also not completely satisfied with the location of the S9’s fingerprint sensor. While I’m pleased Samsung moved it to a more sensible location, I’d prefer if it were located still farther down the phone, similar to the Pixel 2’s scanner.
Beyond the fingerprint scanner shuffle, there are other improvements, too. The S9 includes 4GB of RAM, while the S9+ has an even more sizeable 6GB of RAM.
S9+ on the left and an iPhone X on the right.
Other new features include improved camera functionality, a revamped camera app and AR Emoji — Samsung’s interesting, but ultimately lacklustre take on Animoji (more on this later).
The firm also claims the S9’s speakers are 40 percent louder than the S8’s. While the S9’s stereo audio quality is an improvement over the S8 in fidelity, I found the phone only slightly louder. The phone also now supports Dolby Atmos virtual surround sound with headphones.
Dolby Atmos is a step above standard stereo sound, but the jump in quality isn’t as significant as some might expect. Those who watch a lot of video content on their phones — through streaming services like Netflix, for example — will definitely be pleased with this addition, though.
On a side note, the S9 features a working FM tuner. This isn’t a feature I have any interest in, but it’s something S Series fans have been requesting for a while. Your headphones act as the antenna, and the tuner works even when data is turned off, as long as the S9 hasn’t been set to ‘Airplane Mode.’
Here’s what’s the same
S8+ on the left and the S9+ on the right.
The S9 features the same 18.5:9 aspect ratio, curved sides and glass front and back as its predecessor. The 5.8-inch S9 and 6.2-inch S9+ also include 2960 x 1440 pixel Super AMOLED displays that are identical to the S8’s. While Samsung says the S9’s panel is brighter, I didn’t notice much of a difference when comparing both displays head-to-head. HDR10 compatibility is back, too.
The S9’s screen is just as stellar as the S8’s and this makes sense given the reputation Samsung has built in the display industry over the last few years.
The phone also looks and feels the same as last year’s S Series devices, which is its most significant obstacle. This means that if you liked the S8’s glossy, fingerprint-prone design, then you’ll love the Galaxy S9. Delving into the details, the S9 and S9+ are millimetres taller than their predecessors, and also weigh slightly more. Both of these changes likely won’t be noticed by the average user.
Just like last year, the phone includes 64GB of storage, which is expandable up to 400GB via a microSD slot. The S9 also supports 1.2Gbps data transfer speeds, though this is just future-proofing, given that no real-world network currently reaches those speeds.
Finally, the S9 is IP68 dust- and water-resistant and supports both wired and wireless fast charging, just like the S8.
It’s all about the camera
The camera is the S9 and S9+’s most impressive feature and most significant upgrade.
The rear camera’s lens is capable of switching between f/1.5 (even wider than the LG V30’s f/1.6 aperture) and f/2.4 depending on lighting conditions when set to automatic. The brighter aperture allows 28 percent more light into the sensor when compared to the S8’s f/1.7 aperture, while the f/2.4 aperture prevents overexposure under sunny conditions — for example, on a tropical beach.
Galaxy S9+ photo shot at night in manual mode – 7.9-megapixels, f/2.4, 1/4 shutter speed, 4.3mm ISO100.
Galaxy S9+ photo shot at night in manual mode – 7.9-megapixels, f/1.5, 1/4 shutter speed, 4.3mm ISO100.
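Samsung's "28 percent more light" figure checks out arithmetically: the light an aperture gathers scales with the square of the f-number ratio. A quick sketch (the helper function name is mine, not Samsung's):

```python
def relative_light(f_old: float, f_new: float) -> float:
    """Light gathered scales with aperture area, i.e. with 1 / f-number squared."""
    return (f_old / f_new) ** 2

# The S8's f/1.7 versus the S9's f/1.5 wide setting
print(round((relative_light(1.7, 1.5) - 1) * 100))  # → 28 (percent more light)
```

The same math explains why the step down to f/2.4 tames overexposure: it admits well under half the light of the f/1.5 setting.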
To be clear, it’s not possible to select intermediate f-stops between these two values like with a DSLR. The S9’s variable aperture is binary and jumps between either f/1.5 or f/2.4. The ability to change apertures when shooting in Auto Mode improved low-light performance significantly, and under extremely bright conditions when shooting at f/2.4, highlights were slightly toned down.
The aperture can also be changed when the S9’s revamped camera is set to manual mode (called Pro in the camera app), though the option is buried in the shutter speed section. When I first learned that the S9 would feature a variable aperture, I had high hopes this would allow for mechanical depth-of-field effects. ‘Live Focus’ with the S9’s dual shooter looks great, but still has a heavily processed feel to it.
Unfortunately, while jumping between f/1.5 and f/2.4 does change depth-of-field slightly, the shift isn’t as significant as I expected. This feature isn’t a replacement for a DSLR equipped with a lens capable of a wide aperture.
Speaking of the S9’s camera app, it’s actually usable now — even when set to manual mode. Switching between the app’s various options, including Panorama, Food, Live Focus (which returns from last year’s Note 8), AR Emoji and Super Slow-Mo, is as easy as swiping through its top menu. This can cause issues, though: I occasionally found myself accidentally switching between shooting modes because the settings are so easily accessible.
The S9’s 12-megapixel camera sensor also now includes what Samsung is calling second-generation dual-pixel autofocus, allowing for faster and more reliable focusing. In my experience, I found the S9 focuses more quickly than its predecessor, especially when movement is present in the frame.
The most significant change is that low-light photos are less noisy when compared to the S8’s shooter. Samsung says the S9 features an approximately 30 percent improvement in noise reduction. While it’s nearly impossible to verify the accuracy of this claim, I can confirm that the S9’s low-light photos are cleaner when compared to the S8’s. The S9’s camera processes 12 images per shot in order to reduce noise. To be clear, Samsung’s dual-pixel sensor debuted back with the S7, so this feature isn’t exactly new — the company has just improved on that foundation yet again.
Photos shot with the S9 still look heavily processed. This isn’t necessarily a bad thing given that, under most circumstances, a photo taken with the S8 looked better than one shot with Apple’s iPhone 8, X or even Google’s Pixel 2 XL. Colour balance, lighting and exposure are all corrected heavily behind the scenes with Samsung’s S Series. As a result, photos shot with the S9 don’t look true to life. If you’re the type of person who prefers images that accurately depict natural vision, then Samsung’s S9 definitely isn’t for you.
There’s also a difference now between the S9’s and S9+’s shooter, with the latter device featuring a second rear 12-megapixel telephoto lens with a fixed f/2.4 aperture, allowing for 2x optical zoom and live focus. Both features function identically to how they did with the Note 8.
It’s also worth noting both rear cameras feature optical image stabilization with the S9+, which makes it easier to snap sharp photos under less than ideal conditions.
Super Slow-mo is amazing, yet finicky
The S9’s 960 frames per second (fps) ‘Super Slow-mo’ feature is incredible when it actually works, giving users the ability to create easy-to-share gifs that can be manipulated in a variety of ways, including even setting them as the smartphone’s wallpapers and adding music.
Super Slow-mo comes in two forms: Auto and Manual. Automatic mode places a square box on the display that can be shifted in size and location, which in theory should make the slow-motion recording process easier. When movement passes through the box, a clip is recorded. While this might sound simple, recording slow-motion videos in automatic mode feels unnecessarily finicky.
First, the object needs to travel past the lens exactly where the box is located on the display. Then, the feature also needs to trigger, which didn’t always happen in my experience.
In a number of instances, the S9 either didn’t detect motion or detected the wrong motion, resulting in Slow-mo not triggering properly. If you’ve set up a complicated slow-motion shot, this inconsistency can lead to needless frustration.
Manual Slow-mo gives more control to the user by placing a small button in the right corner of the display that needs to be pressed in order to activate slow-mo mode. This adds more room for error, however: if you happen to miss the moment you want to capture, or if you’re even milliseconds off tapping the Slow-mo activation tab, your chance is gone.
All slow-motion video is shot in 720p. While this is serviceable, it would have been great to see the S9 make the jump to 1080p slow-motion video.
I’ve had a lot of fun with the S9’s ‘Slow-mo’ mode and it is a significant step above the S8’s 240fps slow-motion shooting option.
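To put those frame rates in perspective, the slowdown you see on screen is just the ratio of capture rate to playback rate. Assuming a standard 30 fps playback (my assumption; the review doesn't state the playback rate):

```python
def slowdown_factor(capture_fps: float, playback_fps: float = 30.0) -> float:
    """How many times slower the footage plays back than real time."""
    return capture_fps / playback_fps

print(slowdown_factor(960))  # → 32.0: a 0.2 s burst stretches to 6.4 s on screen
print(slowdown_factor(240))  # → 8.0: the S8's slow-motion mode, for comparison
```

At 32x, even a fraction of a second of real-world action becomes a substantial clip, which is why missing the trigger window by milliseconds is so costly.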
Hopefully upcoming software updates will iron out some of the automatic feature’s most significant issues.
Samsung’s AR Emoji lag behind Apple’s Animoji (mostly)
AR Emoji are weird and off-putting, but at times strangely endearing.
Acting as the S9’s answer to Apple’s surprisingly entertaining Animoji (one of my favourite iPhone X features), Samsung’s AR Emoji have stiff competition.
Instead of instant, on-the-fly facial recognition, the setup process takes between eight and 12 seconds to scan your face. The resulting cartoon usually resembles whoever the app has scanned, but with exaggerated, often uncomfortable features.
AR Emoji’s face-tracking accuracy is also underwhelming. I found the on-screen animated version of myself lagged behind my real-world facial expressions, and there were instances where the tracking would simply glitch.
Where AR Emoji become more interesting is when it comes to the other obviously Snapchat-inspired augmented reality creations, including a box-faced blue dog (my personal favourite), a pink cat, a bunny, and of course, all-important sunglasses.
Despite their technical shortcomings (which could be solved through future updates), AR Emoji are an entertaining diversion and could be the start of something new and exciting for Samsung. If you were expecting the underlying tech powering AR Emoji to rival Apple’s Animoji, then you’re going to be sorely disappointed though.
Some questions still remain surrounding the S9’s DeX dock, which is capable of working as a laptop-like trackpad. It’s unclear when the new DeX Pad is coming to Canada and how much it will cost.
The post Samsung Galaxy S9 and Galaxy S9+ Review: Standing firm appeared first on MobileSyrup.
When Google released the first Android P developer preview yesterday afternoon, the Nexus 5X, Nexus 6P and Pixel C were notably missing from the preview’s short list of supported devices.
In a communication with Ars Technica, the company has confirmed it will not update those three devices to Android P when a final release of the operating system is ready later this year.
While most Android fans had already come to terms with the fact that Google was ending support for its last two Nexus devices, some still held out hope that the company would update the Nexus 5X and Nexus 6P.
When Google launched the two smartphones in late 2015, it promised two years of software support and three years of security updates. Based on that promise, the company wasn’t obligated to update the Nexus 5X and Nexus 6P to Android 8.1, and yet Google still updated both smartphones to the latest minor release of Android Oreo.
Despite those deviations from its stated update policy, Google is now officially saying goodbye to both the Nexus 5X and Nexus 6P.
Additionally, with Google dropping support for the Pixel C, the company no longer officially supports a first-party tablet.
Google will provide the Pixel 2 and Pixel 2 XL with three years of software support. While that’s an improvement over what the company delivered with the Nexus 5X and Nexus 6P, it still pales in comparison to Apple’s update policy. With iOS 11, Apple is still supporting the almost five-year-old iPhone 5S.
Source: Ars Technica
The post Google won’t update Nexus 5X, Nexus 6P and Pixel C to Android P appeared first on MobileSyrup.
Microsoft confirmed yesterday that its lightweight, education-focused Windows 10 S operating system will soon be turned into a mode that can be switched on and off, instead of a full operating system.
In addition to yesterday’s announcement, Joe Belfiore, vice president of Microsoft, has now revealed that switching out of Windows 10 S mode will soon be free, regardless of which Windows 10 version you’re using.
Windows 10 S mode will arrive with the next update to Windows 10. Microsoft expects that Windows 10 S-mode enabled devices will start shipping in the coming months from its partners.
Microsoft announced Windows 10 S in 2017 as a lightweight version of its Windows 10 operating system, designed to compete with Google’s Chrome OS.
Source: Microsoft blog
The post You’ll soon be able to upgrade from Windows 10 S for free appeared first on MobileSyrup.
McAfee has acquired Toronto-based VPN company TunnelBear. Terms of the deal weren’t disclosed.
McAfee says that the acquisition allows it to add to its privacy tools for customers. In a release, the company said that combining TunnelBear’s secure network with an intuitive interface helps keep customers’ data secure on public Wi-Fi and web browsing private from advertisers with the ability to block intrusive ads.
“TunnelBear has built an engaging and profitable direct-to-consumer brand, and we’re confident this acquisition will serve both our end users and partners by embedding its best-in-class, hardened network into our Safe Connect product,” said Christopher Young, CEO of McAfee. “This investment is strategic for McAfee’s consumer business as it further showcases our commitment to help keep our customers’ online data and browsing private and more secure at a time when the threat landscape is growing in volume, speed and complexity.”
According to a McAfee survey, 58 percent of respondents know how to check if a Wi-Fi network is safe to use, but less than half take the time to do so.
“McAfee’s acquisition of TunnelBear is an exciting opportunity for our company. TunnelBear will continue to develop the products our customers have come to love, now with the backing and resources of a leading cybersecurity company,” said Ryan Dochuk, co-founder of TunnelBear. “McAfee shares our passion to help everyone browse a more secure and private internet. The acquisition provides us with the resources to develop our service, expand into new regions, and continue leadership of privacy and security practices in the VPN industry.”
This article was originally published on BetaKit
Vancouver-based Mojio has raised an undisclosed amount of funding from Iris Capital and Telus Ventures.
The funding is a top up from Mojio’s $30 million funding round announced in November 2017.
“We are thrilled Iris Capital and Telus Ventures are fueling this stage in our growth,” said Mojio CEO, Kenny Hawk. “After a breakthrough year connecting more than half a million cars to our platform, this strategic funding adds fuel to our tank as we drive toward additional launches with major mobile network operators globally.”
Mojio’s plug-in device allows older vehicles to become connected cars, with analytics delivered to a user’s smartphone. It has connected over 500,000 vehicles via its portfolio of mobile network operator customers, including Deutsche Telekom, T-Mobile US, Telus, Bell and Rogers.
Mojio is the first international investment outside of Europe for IrisNext, Iris Capital’s latest $280 million fund supported by Orange, Publicis Groupe, Valeo, BRED, and Bpifrance. The investment is set to fuel Mojio’s growth in Europe.
Mojio is currently powering Telus’ connected car service, TELUS Drive+.
“TELUS’ connected car strategy builds upon our successful Internet of Things (IoT) business which includes developing innovative solutions for both the business and consumer markets across Canada,” said Rich Osborn, managing partner at Telus Ventures. “Through venture investments in emerging technology leaders like Mojio, we not only bring better solutions to market more quickly for our customers and drive better connections, but also help propel the Connected Car market in Canada and internationally with complementary telecom carriers around the world.”
This article was originally published on BetaKit
The post Mojio secures $30 million funding from Telus and Iris Capital, plans to expand globally appeared first on MobileSyrup.
Now that Google’s Android P Developer Preview is out in the wild, new features and tweaks to Google’s operating system are starting to be uncovered.
For example, Android Police has uncovered functionality that allows users to connect up to five Bluetooth devices simultaneously.
Within the Developer Preview, users can navigate to the ‘Developer Options’ menu and locate the option to increase the allowed number of connected Bluetooth audio devices.
With Android O, users can only connect to two Bluetooth audio devices at the same time.
For those with multiple Bluetooth devices, such as a smartwatch, Android Auto, multiple types of Bluetooth headphones and Bluetooth speakers, this update will likely be very useful.
This would save a person from having to continuously switch between devices.
Source: Android Police
The post Android P lets users connect up to five Bluetooth audio devices at once appeared first on MobileSyrup.