Shared posts

22 Mar 01:31

Relational Programming in Mozart/Oz

A video I watched recently on logic programming, A Vision for Relational Programming in miniKanren, by William Byrd gives some interesting examples of relational programming using miniKanren.

miniKanren is an embedded constraint logic programming language designed for writing programs as relations, rather than as functions or procedures. Unlike a function, a miniKanren relation makes no distinction between its inputs and outputs, leading to a variety of fascinating behaviors. For example, an interpreter written as a relation can also perform code synthesis, symbolic execution, and many other tasks.

The video demonstrates how a relational interpreter can be used not just to evaluate expressions but also to generate valid expressions given a result. Other examples of relational programming are the Prolog examples in my Shen Prolog post, where relational functions like member can be used to enumerate all members of a list as well as perform the traditional task of finding a member in a list.

I like to use the Mozart Programming System for exploring logic programming and this post goes through converting some examples in Mozart/Oz. Mozart is an implementation of the Oz programming language. The book "Concepts, Techniques, and Models of Computer Programming" is a good textbook for learning the different programming models that Mozart enables, including the relational model. For the examples here I'm using Mozart 1.3.x. It's an old language and implementation but I like to experiment with languages that are a bit different to mainstream languages.

The Prolog examples that follow can be tested in the online SWI Prolog implementation, Swish.

Basic example

The following function in Prolog has multiple values that can be used as an argument:

foo(one).
foo(two).
foo(three).

Passing either one, two or three succeeds:

foo(one).
true

foo(two).
true

foo(four).
false

Using backtracking it's possible to enumerate all valid arguments:

foo(X).
X = one
X = two
X = three

The function findall can be used to return all values in a list:

findall(X, foo(X), Y).
Y = [one, two, three]

Backtracking in Mozart/Oz is not the default. Mozart provides the choice statement to enable backtracking. A choice statement contains a sequence of clauses separated by [] where a 'choice point' is created for each group of clauses. Each clause is tried in turn - if a particular clause fails then execution backtracks to a previous choice point and resumes from there until one clause succeeds or all of them fail. The equivalent implementation of foo in Mozart is:

fun {Foo}
   choice
      one
   []
      two
   []
      three
   end
end

Prolog uses an automatic depth first search to find solutions. Mozart doesn't do automatic search - programs involving choice points must be run in a search engine that can implement any form of search required. A default search engine is provided that does depth first search. Here we create a search object and enumerate all the valid results manually:

Y = {New Search.object script(Foo)}
{Browse {Y next($)}}
[one]

{Browse {Y next($)}}
[two]

{Browse {Y next($)}}
[three]

{Browse {Y next($)}}
nil

The Mozart syntax can look a little strange at first but it's not too difficult to understand once it's learnt. New instantiates an object. Search.object is the class of the object being created. script(Foo) is a record that is being passed to the constructor of Search.object. Foo is the function we created previously. The remaining statements call the next method on the object. next takes as an argument a variable that receives the result of the call. The use of $ tells Mozart to pass a temporary variable to receive the result, and return that result as the result of the call. This result is passed to Browse which displays it in the GUI browser. Results are returned in single element lists and when there are no more results, nil is returned.
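
To make the $ notation a little more concrete, here is a rough sketch (my own, not from the original post) of what the nested call above looks like when written with an explicit temporary variable instead of $:

local Solution in
   {Y next(Solution)}   % next binds Solution to the next answer, e.g. [one], or nil when exhausted
   {Browse Solution}
end

The $ form is shorthand for introducing that temporary variable and using its value in place of the nested call.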

There is a library function to return all possible solutions in a list, Search.all:

{Browse {Search.all Foo 1 _ $}}
[one two three]

The ability to interactively drive the search enables writing programs that control the search process. The Explorer is a Mozart tool that uses this ability to show an interactive graph of the search space of a program. It can be run to show a graph of all solutions in Foo with:

{Explorer.all Foo}

Membership testing

In Prolog a function to test membership of a list can be written as:

mem(X, [X|_]).
mem(X, [_|Y]) :- mem(X, Y).

This states that X is a member of the list if X is the head of the list (the first element), or if it is a member of the tail of the list.

mem(2, [1,2,3]).
true.

Thanks to Prolog's backtracking you can also use this function to enumerate all values in the list:

mem(X, [1,2,3]).
X = 1
X = 2
X = 3

In Swish you need to click 'Next' to get each result. You can return all results as a list with findall:

findall(X, mem(X, [1,2,3]), Y).
Y = [1, 2, 3].

This is what the non-backtrackable member function looks like in Mozart:

fun {Mem X Ys}
   case Ys
   of nil then false
   [] Y|Yr then X==Y orelse {Mem X Yr}
   end
end

{Browse {Mem 1 [1 2 3]}}
true

A backtrackable version of this code uses choice instead of case to create choice points in each clause:

proc {Mem X Ys}
   choice
      Ys = X|_
   []
      Yr
   in
      Ys = _|Yr
      {Mem X Yr}
   end
end

Here proc is used instead of fun. A procedure doesn't return a value; it is expected to bind values to its arguments to return a result. In this case either X or Ys will be bound depending on what is passed as an argument. Given the call Mem 1 [1 2 3], the first clause of the choice succeeds - the head of the list is equal to X. With the call Mem X [1 2 3], X will be successively bound to each element of the list, in an order that depends on the search strategy used by the search engine evaluating it. Mem 1 Y will enumerate all possible lists containing 1:

{Browse {Search.all proc {$ L} {Mem 1 [1 2 3]} end 1 _ $}}
[_]

{Browse {Search.all proc {$ L} {Mem L [1 2 3]} end 1 _ $}}
[1 2 3]

Y={New Search.object script(proc {$ L} {Mem 1 L} end)}
{Browse {Y next($)}}
[1|_]

{Browse {Y next($)}}
[_|1|_]

{Browse {Y next($)}}
[_|_|1|_]

The difference here compared to our search over Foo is that an anonymous procedure is passed to the search engine. Foo was already a single argument procedure so it didn't require a wrapper. The anonymous procedure takes a single argument which is expected to be bound to the result of a single iteration. In the first example no result is bound; the fact that the Mem call succeeds is enough. In the second, L is bound to the first argument to Mem, resulting in a list of all valid first arguments. In the third, L is bound to the second argument to Mem, resulting in all valid lists that contain the element 1. This is an infinite set so we only iterate through the first few solutions.
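
As a rough sketch (the procedure name FirstArgs is mine, not from the original post), the second call above could also be written with a named one-argument procedure instead of an anonymous one, and should give the same result:

proc {FirstArgs L}
   {Mem L [1 2 3]}   % bind L to each value Mem accepts as its first argument
end
{Browse {Search.all FirstArgs 1 _ $}}
[1 2 3]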

Syntax checker

The miniKanren video referenced earlier, A Vision for Relational Programming in miniKanren, has an example of a simple language syntax checker implemented in the relational style. The equivalent Mozart implementation is:

proc {LCSyn Term}
   choice
      {IsAtom Term true}
   []
      X T
   in
      Term = lambda(X T)
      {IsAtom X true}
      {LCSyn T}
   []
      E1 E2
   in
      Term = [E1 E2]
      {LCSyn E1}
      {LCSyn E2}
   end
end

A Term is either an atom, a lambda record containing an argument and body, or an application of two expressions (here represented as a list). A Term can be tested to see if it is valid with:

{Browse {Search.one.depth
         proc {$ L}
            {LCSyn lambda(foo bar)}
         end
         1 _ $}}

A result of [_] indicates that it succeeded (nil would be a failure). Thanks to the magic of relational programming it's possible to enumerate all valid terms:

Y={New Search.object script(LCSyn)}
{Browse {Y next($)}}
[lambda(_ _)]

{Browse {Y next($)}}
[[_ _]]

{Browse {Y next($)}}
[[lambda(_ _) _]]

{Browse {Y next($)}}
[[[_ _] _]]

Reversible Factorial

As a final example, the following program implements a factorial function that can compute the factorial of a number, or the number given its factorial:

proc {Fact ?N ?F}
   proc {Fact1 ?N ?F}
      choice
         N = 0
         F = 1
      []
         N1 F1
      in
         N1::0#FD.sup
         F1::0#FD.sup
         N >: 0
         N1 =: N - 1
         F =: N * F1
         {Fact1 N1 F1}
      end
   end
in
   N::0#FD.sup
   F::0#FD.sup      
   {Fact1 N F}
   {FD.distribute naive [N F]}
end

% Factorial of 5
{Browse {Search.all proc {$ L} {Fact 5 L} end 1 _ $}}
[120]

% What factorial gives the answer 24
{Browse {Search.all proc {$ L} {Fact L 24} end 1 _ $}}
[4]

This uses a few more Mozart features but is essentially a search through the choice points in a choice statement. In this case it also uses Finite Domain Constraints. These let you tell Mozart what range a particular integer value can take and impose constraints upon that value; the search process then attempts to find a solution that narrows the integer down to a single value. The syntax X::0#5 constrains the variable X to be from zero to five. FD.sup is a constant for an implementation defined maximum upper bound. Operators ending in : impose constraints on values that can be backtracked during search. One limitation of this approach is that the finite domain has an implementation defined maximum upper value, and the search will fail if it hits that limit, which unfortunately restricts the practicality of a reverse factorial.
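
As a small standalone sketch of the finite domain syntax (my own example, not from the original post), the following search should enumerate every integer from zero to nine that is greater than six:

{Browse {Search.all
         proc {$ X}
            X::0#9                    % X is an integer in the domain 0..9
            X >: 6                    % constraint: X must be greater than 6
            {FD.distribute naive [X]} % search for concrete values of X
         end
         1 _ $}}
[7 8 9]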

Mozart/Oz summary

Mozart/Oz has a lot of other interesting functionality: distributed objects, GUI programming and concurrency, amongst others. It's a fun tool to experiment with and Concepts, Techniques, and Models of Computer Programming is an excellent read.

Unfortunately development of Mozart has slowed from its heyday. A reimplementation of Mozart, called Mozart 2, is being worked on that uses LLVM for the code generator and replaces the internal constraint programming system with the Gecode toolkit. Development seems to have stalled recently and it lacks many of the features that already exist in older Mozart versions. Hopefully it'll pick up steam again.

For this reason I continue to use Mozart 1.3.x. There is a 1.4.x version but it has some issues that make me avoid using it. The distributed layer was replaced with an implementation written in C++ which has some bugs that I've been unable to work around in projects where I used it. The distribution panel is broken due to the hooks it requires not being implemented by the new C++ layer. At some point Mozart 2 may be complete enough for me to move to that version. I make occasional fixes to 1.3.x and 1.4.x to keep them building and running on Linux.

Even though Mozart seems to be in a holding pattern, it's great for exploring ideas, and the list of papers is a good resource for learning about distributed programming, constraints and concurrency. Some interesting projects implemented in Mozart include:

22 Mar 01:30

Ethereum JS Ecosystem Updates

by Martin Becze

It’s been a fairly busy couple of months for the Ethereum javascripters. To start with, there was a really great hackathon with IPFS. You can read Dan Finlay’s excellent write up here.

Also, during this time Aaron Davis (Kumavis) made some excellent progress towards a JS light client by utilizing IPFS’s libp2p to build an in-browser mesh network and IPLD to provide the merklization layer. This will be important work in the future for building pure in-browser clients. Also Casey Detrio worked on a standard json RPC test suite, which you can see the results of here.

After the Seattle Meetup, we (Axic and Wanderer) sat down for a week long hackathon in Budapest to hash out some details of ewasm. Elsewhere in JS world, Fabian is doing a huge refactor of Web3.js for the 1.0 release, while Nick Dodson has been busy with ethjs. The rest of this post will be charting the various projections that this technology could provide as well as going into some details about each individual project. All these projects are open source and encourage community participation, so if you are interested please check them out, say hello and send in a PR if you have time!

EWASM

Ewasm’s goal is to research and replace the EVM with WebAssembly and, secondarily, to implement a client for the current system which can be efficiently JITed (or transcompiled) to WebAssembly.

A major piece of evaluating WebAssembly for blockchain usage will be creating a test network, and this year the focus of the Ewasm team will be bringing that test network to life. The testnet work will:

  • enable hands-on work with ewasm for a wider audience
  • enable related work, such as experiments with casper, by providing a flexible platform for experimentation

The ewasm track is dedicated to ewasm research and development, while the client integration track will be dedicated to developing the network and bringing full and light clients into existence. But there are many components shared between these two tracks. The Ewasm project is being broken down into two main components: the Kernel Layer, which handles IPC and manages the state, and the core VM. This should enable us to use the same framework for different VM implementations.

So to recap, the major tasks for ewasm are:

  • Build an ewasm test network
  • Create a reusable “kernel” module
  • Revamp ethereumjs-vm
    • Use ewasm-kernel for the message passing
    • Implement the latest EIPs
  • Ewasm integration tools
  • Solidity-ewasm integration (upcoming effort for the solidity hackathon!)

Please come join the implementation effort! We have semi-weekly meetings on Tuesdays. Our communication channel is on Matrix at prima:matrix.org (or #ewasm on IRC or at gitter)

Networking

There are several reasons to have an Ethereum networking implementation in JS. For one, it would allow us to implement a full and light Ethereum JS node. These light clients would run both in a node.js environment and in a browser. A prerequisite for an in-browser light client is “bridge” nodes. These nodes might also act as signaling servers for the webrtc network that the browser light clients would use to relay  messages from the RLPx network to the webrtc network. This work is being spearheaded by Metamask using IPFS’s libp2p. Also the RLPx implementation was recently revamped by fanatid.

IPLD

Ethereum’s blockchain and on-chain state can be understood as a graph of hash-linked data. IPFS/IPLD is proposed as a generic system to describe and distribute hash-linked data. Therefore we can describe Ethereum as an application layer on top of the hash-linked data availability platform. As a proof of concept, Kumavis implemented IPLD resolvers for the Ethereum data formats that define where hash-links are encoded inside the canonical Ethereum formats (e.g. block and state trie node). This, combined with other generic features of libp2p (IPFS’s generic p2p networking stack), enables the creation of minimal Ethereum clients that focus on the consensus protocol and state transition mechanism. One advantage of this approach is that the networking layer is transport-agnostic and can be used in environments (such as the browser) that don’t have access to the tcp/udp transports that the standard Ethereum clients require. This project is still in the research phase. MetaMask hopes to use this approach to implement a browser compatible Ethereum light client via a secondary network, bridged by hybrid nodes.

Web3.js 1.0 incoming!

A new version of web3.js is in the making. It is the biggest refactor of the codebase since the inception of the popular Ethereum library. It will have a lot of convenience features like confirmation and receipt events on transactions, a nice subscription API, and checksum checks on address inputs.

The API is still not yet finalized, but if you are eager to have a look you can check out the docs here.

The new version will also have quite a few breaking changes, but those updates are necessary to get the new API right and remove some deprecated methods along the way, like synchronous calls. 1.0 will only have promises and in some cases “PromiseEvents” to better reflect multiple events on a transaction’s execution. For those who are thinking of transitioning their apps to the new web3, there will be a migration guide upon launch to help make the transition from 0.x.x as easy as possible.

In Mist there will be no web3 exposed by default anymore, as this encourages the bad habit of relying on the Mist-provided web3, which makes breaking changes disastrous for dapps. Instead, there will be an “ethereumProvider”, which libraries like web3 can use to talk to the underlying node. Web3.js will automatically detect any given provider and expose it on its API for easy instantiation.

For those who can’t wait and want to try it right now, checkout the 1.0 branch in the web3.js repo. Be aware there might be dragons!

Ethjs

Ethjs is a new highly optimised, light-weight JS utility for Ethereum geared to working with the json RPC, much like web3.js but lighter, async only and using bn.js. The current ongoing activity includes:

  • Adding the ABI methods for decoding logs in ethjs-abi
  • Having fixed a small decoding bug in ethjs-abi (handling 0x addresses)
  • Merged new schema for personal recover and sign ethjs-schema
  • Looking for help making ethjs-filter stateless (infura ready)
  • Bug fixing in ethjs-contract
  • Documentation updates all around
  • Upcoming ethjs version 0.2.7 release!

TestRPC

Working on the 4.0.0 release! This release will include:

  • Database persistence. Now you can create a test chain and save that data, just like any other private chain!
  • Clean up of how data is stored in memory, which should reduce memory issues significantly. Although there will be a slight cost in performance, which will mostly be unnoticeable unless you’re making thousands of transactions, it will bring a huge increase in stability.
  • Bundling for the browser (provider only).
  • Easier installs on Windows, and possibly other platforms.

We’ll be moving the TestRPC to the Truffle github organization since it’s primarily maintained by Truffle developers. There are significant new TestRPC add-ons coming. And we’re investing significant energy in documentation and branding that unifies it under the Truffle brand. Any feedback on this move is appreciated.  Lastly, the TestRPC needs a new name that exudes everything it can do. If you have an idea let us know!

Community

The Ethereum JS community is an exciting and wonderful thing to be a part of. There are many great projects happening. If you are interested in plugging in, we have weekly Friday meetings at 3:00 EST / 10:00 PST / 18:00 UTC. Watch our gitter channel for the chat link. Also, we are organizing an upcoming hackathon. Let us know if you are interested.


ADDENDUM [Mar. 22, 2017]: Note that some of the projects in this post are not directly supported by Ethereum Foundation, but have been included as they are relevant to the overall Ethereum JS ecosystem update by the author.


22 Mar 01:30

What is Literature for?

by Caterina Fake
22 Mar 01:29

You Can Now Manually Save Your Parking Spot in Google Maps

by Rajesh Pandey
Google Now on Android has long been able to automatically remember where you parked, but the feature was unreliable since it made use of the various sensors and the activity recognition system of the OS to detect when you have stopped moving. Now, Google is updating its Maps app to add the option to quickly save a parking spot. To save a parking spot, simply tap on the blue dot icon and select the “save your parking” option.
22 Mar 01:29

Old is New: Why Desktop UX still inspires

by xjgi4k
My conversion from Mac to Windows last fall had an unexpected bonus: by shifting from Mac to Windows conventions, I was forced to rethink many aspects of the desktop experience. As a result, I’m now inspired more by desktop UX concepts than mobile ones. I’m not surprised if some might think this quaint, or worse, […]
22 Mar 01:29

Second Android 7.1.2 beta adds support for ‘Swipe for notification’ gesture to Nexus 6P

by Igor Bonifacic

Back at the beginning of February, Google started rolling out Android 7.1.2. Billed as an “incremental maintenance release,” the update didn’t include many new features.

On the Nexus 5X, however, 7.1.2 added a gesture called “Swipe for notifications.” When enabled, the gesture allows Nexus 5X owners to check their notifications by swiping down on their smartphone’s fingerprint sensor.  Before 7.1.2, this ‘move’ was exclusive to the Pixel and Pixel XL.

For whatever reason, Google did not add this feature to the Nexus 6P with the launch of 7.1.2. However, now that the second 7.1.2 beta is out to developers, the company has remedied the situation. Once they’ve updated their phone, Nexus 6P users can enable the gesture by turning on the relevant toggle in the Moves section of the settings menu.

If you’re enrolled in the Android developer preview, check out the gesture and tell us how it works for you.

Source: Android Authority


21 Mar 01:20

Subscription service changes and Clips available

by Narrative

Dear Narrative community,

It’s been some time since we were in touch, so let us update you on what we’ve been doing and what is happening.

As those who read our newsletter from November probably know, we’re transitioning the Narrative company and brand to a new organisation and evaluating the relaunch of the Narrative Clip product line to a wider market.

We’re transitioning the Narrative Service to a paid subscription model

The cloud-based Narrative Service stores the photos you grab with your Narrative Clip for easy viewing through the mobile apps. To keep operating the service for you, we need to start charging a small monthly fee. Since we aren’t yet sure how many of you want to keep using the service, we need to start with a fee of around 3.8 USD/month, and then see what we can do to reduce our costs and share any savings with you.

Our aim is to split the subscriptions into two tiers as soon as we have usage statistics, with a casual user plan at a lower cost as well as a cool Pro Lifelogger plan which might have a slightly higher cost. There is currently no free plan, so if you don’t sign up for the subscription, your photos and videos on your account will be deleted, though you may continue using your Narrative Clip 1 or 2 by downloading files locally.

As a part of this, we are transitioning the stored photos and videos into the new company’s cloud service, and unfortunately this process has to be concluded with only a few days of notice now. If you intend to keep using the Narrative Service with our paid subscription plan, and thus keep your photos and videos stored on our servers, please log into the Narrative mobile app as soon as you can and accept the updated Terms of Service agreement. Doing this will give us a heads up that your photos should be transferred, and is also necessary for us to provide the service to you from the new company.

To reduce the risk of terminating accounts due to missed communication, we are transferring all photos uploaded since the previously announced shutdown date of 1 November 2016. We picked this date since everybody who had uploaded photos before that has had a chance during October to download their data.

ALL PHOTOS ON OLD ACCOUNTS (THAT HAVE NOT UPLOADED DATA AFTER 1 NOV 2016) AND THAT HAVE NOT ACCEPTED THE LATEST TERMS OF SERVICE AGREEMENT WILL BE DELETED

We will send an email with the link to the new Subscription Signup page when it is launched in a few days, as well as update you with a post here.

Scheduled downtime and Desktop client updates

While we transition the database server, we need to temporarily shut down the service both for uploads and viewing, to avoid uploaded data being lost during the move. This is scheduled to take place the week of March 20 and might last between 1 and 3 days.

At the same time, uploading through the oldest PC/Mac uploader programs will be disabled. These used an older upload protocol which is difficult for us to keep maintaining and thus it will be disabled during the move. We see that some of you are still using the old programs, probably the really hardcore Narrative Clip 1 users out there. But even though we respect nostalgia, you need to upgrade to the latest PC/Mac tools if you want to keep using your Clip 1 after next week. The links are at start.getnarrative.com.

We have a limited batch of Narrative Clip 2 for immediate shipping

We are happy to announce the availability of a limited batch of the Narrative Clip 2 in Red, White and Black, new in retail boxes. These will be available for purchase through our online store at getnarrative.com in about a week so please check it out if you are interested. For the purchasers of new Clips, a subscription plan of 3 months will be included. We will post here again as soon as it is up and running.

If you are unsure on what to do, want to give us suggestions or feedback, or want to hang out with other cool Narrative users, please check out the Facebook Narrative Clip Lounge or send an email to support@getnarrative.com.

We do hope that this does not create any inconvenience for any users, and we thank all of you who have helped us towards getting Narrative back on track.

The New Narrative Team.


21 Mar 01:19

John Gruber on the 2017 iPad Lineup

by Federico Viticci

John Gruber on what may be coming next in terms of iPad refreshes:

What does make sense to me is a new 10.5-inch model. The idea makes sense — keeping the physical footprint of the current 9.7-inch models but reducing the bezels and putting in a bigger display. The ideal form factor for iPads and iPhones is just a screen, like the phones in Rian Johnson’s Looper — reducing the size of bezels and moving toward edge-to-edge displays is inevitable. Even the pixel density math works out for a 10.5-inch display.

What doesn’t make sense to me is the timing. I don’t see how an iPad with an exciting new design could debut alongside updated versions of the existing 9.7-inch and 12.9-inch iPads. Who would buy the updated 9.7-inch iPad Pro with the traditional bezels if there’s a 10.5-inch model without bezels? No one.

If Apple is going to position both the second-gen 12.9" and 9.7" iPad Pros as the high-end models, I don't see where a simultaneous release of a drastically different 10.5" iPad Pro would fit. But if the second-gen iPad Pros (with the current form factors) move to the low end of the lineup, that means the 10.5" iPad Pro could introduce an edge-to-edge design with no Home button before the iPhone gets such treatment (supposedly) later this year.

That idea always seemed odd to me. Traditionally, the iPad doesn't get major hardware changes before the iPhone. The iPad hardware tends to follow the iPhone. True Tone and the four-speaker system were iPad Pro-first features, but they weren't fundamental platform changes such as Touch ID or Retina. Both of those came to the iPhone first. (I won't even count the Smart Connector here.) An edge-to-edge design with no Home button is a major platform shift – particularly if it includes new developer APIs, which would have to launch in the Spring before iOS 11 if the rumor of an imminent iPad Pro 10.5" is to be believed. At this point, I find that somewhat hard to believe.

Instead, I think spec-bumps across the entire iPad lineup would make more sense in the short term. I can see Apple bringing consistency to the product line (True Tone, USB 3 speeds, and fast charging for every iPad Pro model) and adding faster CPUs/more RAM for powerful iPad-only features coming with iOS 11. I'm curious to see if Apple will revive the iPad mini by making a 7.9" iPad Pro and if iPad accessories will receive substantial improvements at all (it'd be nice to get an upgraded Smart Cover or a Pencil with superior battery life).

→ Source: daringfireball.net

21 Mar 01:18

Oh hello, dear Brits, you'd like to leave the EU ...

mkalus shared this story from Fefes Blog.

Oh hello, dear Brits, so you'd like to leave the EU? Then how about first paying us the 60 billion euros you still owe us!
21 Mar 01:18

Why Isn’t It Easier To File Your Tax Return For Free? Thank TurboTax, H&R Block

by Chris Morran
mkalus shared this story from Consumerist.

For most people, the IRS now has all the information it needs to estimate how much you owe in taxes, or how much of a refund you are due. So why is the burden on you to tell the federal government this same information? It may have something to do with the millions of dollars that H&R Block, Intuit (maker of TurboTax), and others have spent lobbying to maintain their exclusive arrangement with the IRS.

The IRS Restructuring and Reform Act of 1998 directed the Secretary of the Treasury to come up with a “return-free” tax filing system by 2008. Under such a system, the IRS would take the W-2s, 1099s, and other tax forms it receives to automatically calculate and prepare a rough draft tax return for taxpayers who want it — and for free.

If the taxpayer agreed with what they saw in a return-free filing, they would simply sign it. If there’s a problem or something that needs to be changed, the taxpayer would make those corrections and submit. Taxpayers who wanted to file their own returns would be free to do so; no mandate that everyone goes through this process.

It’s been nearly a decade since that deadline came and went. What happened?

Rather than work toward meeting the 2008 deadline for offering return-free filing, the IRS (at the direction of the Bush administration) instead established the Free File program in 2002, allowing certain tax prep companies the ability to offer free electronic filing software.

Those companies are known as the Free File Alliance, whose members include Intuit and H&R Block. Since 2002, the Alliance has repeatedly extended its exclusivity agreement with the IRS. The seventh and most recent deal [PDF] between the IRS and the Alliance extends their relationship through Oct. 2020.

Even though the tax prep industry and the IRS have this long-term agreement, ProPublica points out that both Intuit and H&R Block have continued to use money in an effort to lobby Congress to stop laws that would open up the door to return-free filing, or to support legislation that would make this IRS relationship permanent.

Of the nearly $2.4 million Intuit spent on lobbying in 2016, about 75% of it involved one particular piece of legislation, the Free File Act of 2016, which sought to lock in the public-private partnership with the Free File Alliance.

H&R Block also spent $1.7 million lobbying in support of this bill, more than half of the $3.26 million it used for lobbying last year.

A large chunk of H&R Block’s lobbying money also went against legislation intended to make sure paid tax preparers are competent.

Block also spent $210,000 trying to defeat Sen. Elizabeth Warren’s attempt to jumpstart the movement toward return-free filing — even though that bill had no chance of getting out of committee, let alone being signed into law.

The Free File Alliance and supporters of legislation like the Free File Act argue that having the government present you with a pre-filled form is a matter of federal overreach, despite the fact that the taxpayer would not be required to agree with the IRS estimate.

Tax law specialist Joseph Bankman of Stanford Law School tells ProPublica that he doesn’t see it that way. Having the government pre-fill the tax return could actually help taxpayers, by compelling the IRS to “show its hand.”

“Now you know what the government knows,” Bankman explains. “If there’s a mistake that goes in your favor, maybe you don’t call attention to it.”

While the Free File Alliance website brags that “70% of American taxpayers” are eligible for Free File, and “98% of users would recommend the program to others,” what the site glosses over is that, according to the IRS, only about 2-3% of eligible taxpayers actually take advantage of Free File.

20 Mar 23:59

What we learned about creative collaboration from artists at SXSW

by Liz Armistead

Last week, we had the pleasure of talking with an inspiring group of musicians, designers, podcasters, and comedians at the 2017 South by Southwest Conference and Festivals in Austin, Texas. Our goal was to find out how some of the most brilliant minds in the world collaborate on their creative work.

One recurring theme was the importance of taking risks and finding ways to step out of your comfort zone. In almost every interview, we heard about creative breakthroughs that happened when they were most willing to let themselves be vulnerable. Each artist had their own unique story to tell about the ways their work improved when they invited others into their process.

Many mentioned the importance of the trust and chemistry in their relationships with their collaborators. Those were factors that made it feel less like work and more like—as Bridgit Mendler put it—a “labor of love.”

For some creative pros, taking risks meant reaching out to people they admired and asking if they wanted to work together. For others, it meant getting past the fear of criticism when they asked for feedback. In other cases, risk taking meant being willing to let go of control, go with the flow, and follow an idea wherever it took them—even when it meant keeping the happy accidents. In this clip, Cindy Wilson of the B-52s tells us how a famous moment in “Love Shack” came from an improvisation.

Another common theme was the way social media served as a connecting point for artists who wanted to work together, even if they’d never met in person. ROZES gave us the inside story of how one collaboration began on Twitter.

In our interview with Sasheer Zamata, we learned that sometimes the funniest sketches came from collaborations with the most serious actors—and other surprisingly multi-talented hosts.

The major takeaway? Technology might be the tool that brings these creative leaders together, but the relationship between the collaborators is the true engine of creation. And the spark that’s created when two or more people bring ideas to the table consistently produces strong, sometimes unexpected, results.

Stay tuned for more creative insights from the SXSW Dropbox Studio. Follow us on Twitter, Facebook, and Instagram to catch more behind-the-scenes looks from our interviews at SXSW.

Grow bigger, brighter ideas with Dropbox Paper. Find out how.

20 Mar 23:59

Superscreen mirrors your smartphone on a 10.1-inch FHD touchscreen display [Sticky or Not?]

by Rose Behar

Most arguments against mobile tablets boil down to two things: too little functionality at too high of a price.

Superscreen, a wildly popular new item on Kickstarter, solves at least one of those problems by mirroring your smartphone to a 10.1-inch 2560 x 1600 FHD touchscreen display for the early bird price of $99 USD (around $130 CAD).

The display is 241.8mm long, 172.5mm wide and 7.8mm thick (0.7mm thicker than the iPhone 7, for reference). Internally, it runs on an unnamed 2GHz quad-core processor, 4GB of RAM and contains a 6,000mAh battery with USB-C charging.

Superscreen’s dual high fidelity speakers kick in when your phone is on mute and can hit 88 decibels, adding another entertainment-based bonus to its overall value.

As for connection, Superscreen says it works through an app downloaded on your smartphone and that its “patent pending technology transfers data between your phone at industry leading speeds as far as 100 feet away regardless of obstructions” — transmitting even your cellular data to the device. How does its patent-pending technology do that? The company doesn’t go further into detail, unfortunately, only adding that the Superscreen is compatible with the vast majority of Android and iOS devices and supports Bluetooth 4.1.

The screen also dabbles in the camera game with a 2-megapixel front-facing and 5-megapixel rear-facing camera — a peculiar choice given that the consumer can’t use the Superscreen without their likely much better equipped smartphone nearby.

Considering all that, could the Superscreen have any chance in the tablet market — or as we at MobileSyrup say: is it sticky?

Verdict: Sticky! (aka thumbs up)

At the very least, the Superscreen could be an inexpensive solution for parents and kids in the car and at most — if it can actually provide a seamless, premium experience — the touchscreen display could become the preferred form for next-generation tablets.

On a personal level, though, I still have an old iPad wasting away in a drawer somewhere that I really should pay some attention to before moving on to the next tablet solution.


Note: This post is part of an ongoing series titled Sticky or Not. Sticky or Not began as a series on MobileSyrup’s Snapchat account in which Rose Behar analyzes new and often bizarre gadgets, rating them sticky (good) or not (bad). Now the series is expanding to include articles, because who doesn’t love a quirky new gadget? Make sure to add MobileSyrup on Snapchat to get a quick hit of Rose’s oh-so candid thoughts before it reaches the site.


20 Mar 23:58

The Last Gasp of the Aloe Blooms

by Ms. Jen

20 Mar 23:58

Android co-creator’s startup hits roadblocks on its path to launching a new phone

by Jessica Vomiero

The tech startup founded by Android co-creator Andy Rubin, Essential, may have lost a $100 million USD investment from SoftBank.

Despite a months-long effort, the deal may go awry due to a conflict of interest. SoftBank’s CEO Masayoshi Son is planning to launch the Vision Fund investment group with $100 billion later this year and Apple is committing $1 billion to the fund. Anonymous sources cited by The Wall Street Journal say Son felt that dedicating $100 million to Rubin’s startup had the potential to put SoftBank on the outs with Apple.

In addition, SoftBank’s ownership of ARM could have been a factor. It’s possible that Son wanted to avoid putting off other companies that use ARM like Samsung and LG by investing in a competitor like Essential. 

It’s not clear whether Rubin’s Essential was counting on an investment from SoftBank, though some reports suspect that the tech startup has secured $100 million in funding from unnamed investors. 

MobileSyrup previously reported that Rubin is leading a 40-person team made up of former Google and Apple engineers while it finalizes its first customer-facing device, a high-end smartphone designed to compete with the likes of Apple, Samsung and Google.

Source: The Wall Street Journal Via: Android Authority 


20 Mar 20:10

Pulling the Moves Together

by mikecaulfield

I’ve talked about how you have three basic moves in web investigations:

  • Check for previous work
  • Go upstream
  • Read laterally

These can be used on simple claims (“Bernie Sanders shouted ‘Death to America’ at a Communist rally”) to get an answer quickly. But the real reason I like this set of moves is that they can be combined and chained together for more complex investigations.

To show that, I recorded my screen for 50 minutes while I looked into the claim that millions may die of cancer due to the Fukushima reactor meltdown. As I went upstream I found there was no there there. There was literally no source to this information. About 15 minutes into the research I decided to focus on the more empirical claim that the rates of thyroid cancer in Fukushima Prefecture were hundreds of times above normal.

The thing I find when I do these investigations is it is just these moves, chained together over and over. You go upstream for a bit to find that one route is a dead end. You come back to your original document and find another route upstream. You get upstream there, but laterally reading shows you the site has no authority. You go to Google to see if Google can get you closer to the origin of the claim. You find counter-evidence to the claim. You go upstream to find the source of that counter evidence. You read laterally to assess the counter-evidence. And so on.

Here’s the video, sped up by a factor of three and re-narrated to make it (slightly) less boring:

You can look at the resulting page. It’s a really drafty writing job, but it’s a wiki, so feel free to sign up, log in, and make it better. 😉

There's a lot of domain knowledge I have here that an average student might not. I helped develop statistical literacy guidelines and taught an introductory class on statistical literacy and health for years, so I already know quite a bit about issues caused by global screening for cancer. I recognize the journal Science as a giant in the field, and gravitate to that link in the Google results because of that knowledge. But those issues aside, what is most interesting to me is that a complex investigation looks like many simple investigations chained together. When you see that in a literacy context, it's usually good news.


20 Mar 20:10

Sony Xperia L1 Announced With 5.5-Inch Display, 13MP Camera

by Evan Selleck
Sony has announced a new mid-range handset, which will arrive in the United States in the early part of 2017.
20 Mar 20:10

Workflow’s New File and Ulysses Actions

by Federico Viticci

In a seemingly minor 1.7.2 update released over the weekend, the Workflow team brought a few notable file-based changes to the app.

Workflow's existing support for cloud storage services has been expanded and all file actions have been unified under a single 'Files' category. You can now choose files from iCloud Drive, Dropbox, or Box within the same action UI, and there are also updated actions to create folders, delete files, and get links to files. Now you don't have to switch between different actions for iCloud Drive and Dropbox – there's only one type of File action, and you simply pick a service.

Interestingly, this means that Workflow can now generate shareable links for iCloud Drive files too; here's an example of a workflow to choose a file from the iCloud Drive document provider and copy its public link to the clipboard. (Under the hood, Workflow appears to be using the Mail Drop APIs for uploads. These links aren't pretty, but they work.)

There's also a noteworthy change for Ulysses users. Workflow now allows you to easily extract details from Ulysses sheets using their ID. After giving Workflow permission to access your Ulysses library (which, unfortunately, still has to be done using a glorified x-callback-url method), you'll be able to chain Workflow and Ulysses to, say, get the Markdown contents of a document, extract its notes, or copy its title to the clipboard. The new 'Get Ulysses Sheet-Get Details of Ulysses Sheet' combo makes Ulysses automation much easier and faster.

If you work with files in Workflow on a daily basis, and especially if you're an iCloud Drive user, you'll want to check out the new actions and rethink some of your existing workflows. You can get the latest version of Workflow here.


20 Mar 20:09

WebVR and AFrame Bringing VR to Web at the Virtuleap Hackathon

by Sean White

Imagine an online application that lets city planners walk through three-dimensional virtual versions of proposed projects, or a math program that helps students understand complex concepts by visualizing them in three dimensions. Both CityViewR & MathworldVR are amazing application experiences that bring to life the possibilities of virtual reality (VR).
Both are concept virtual reality applications for the web that were generated for the Virtuleap WebVR Hackathon. Amazingly, nine out of ten of the winning projects used AFrame, an open source project sponsored by Mozilla, which makes it much easier to create VR experiences. CityView really illustrates the capabilities of WebVR to have real-life benefits that impact the quality of people’s daily lives beyond the browser.

A top-notch batch of leading VR companies, including Mozilla, funded and supported this global event with the goal of building the grassroots community for WebVR. For non-techies, WebVR is the experimental JavaScript API that allows anyone with a web browser to experience immersive virtual reality on almost any device. WebVR is designed to be completely platform and device agnostic and so it is a scalable and democratic path to stoking a mainstream VR industry that can take advantage of the most valuable thing the web has to offer: built-in traffic and hundreds of millions of users.

Over the three-month-long contest, teams from a dozen countries submitted 34 VR concepts. Seventeen judges and audience panels voted on the entries. Below is a list of the top 10 projects. I wanted to congratulate @ThePascalRascal and @Geczy for their work that won the €30,000 prize and spots to VR accelerator programs in Amsterdam, respectively.

Here’s the really excellent part. With luck and solid code, virtual reality should start appearing in standard general availability web browsers in 2017. That’s a big deal. To date, VR has been accessible primarily on proprietary platforms. To put that in real world terms, the world of VR has been like a maze with many doors opening into rooms. Each room held something cool. But there was no way to walk easily and search through the rooms, browse the rooms, or link one room to another. This ability to link, browse, collaborate and share is what makes the web powerful and it’s what will help WebVR take off.

To get an idea of how we envision this might work, consider the APainter app built by Mozilla’s team. It is designed to let artists create virtual art installations online. Each APainter work has a unique URL and other artists can come in and add to or build on top of the creation of the first artist, because the system is open source. At the same time, anyone with a browser can walk through an APainter work. And artists using APainter can link to other works within their virtual works, be it a button on a wall, a traditional text block, or any other format.

Mozilla participated in this hackathon, and is supporting WebVR,  because we believe keeping the web open and ensuring it is built on open standards that work across all devices and browsers is a key to keeping the internet vibrant and healthy. To that same end, we are sponsoring the AFrame Project. The goal of AFrame is to make coding VR apps for the web even easier than coding web apps with standard HTML and javascript. Our vision at Mozilla is that, in the very near future, any web developer that wants to build VR apps can learn to do so, quickly and easily. We want to give them the power of creative self-expression.

It's gratifying to see something we have worked so hard on enjoy such strong community adoption. And we're also super grateful to Amir and the folks that put in the time and effort to organize and staff the Virtuleap Global Hackathon. If you are interested in learning more about AFrame, you can do so here.


20 Mar 18:33

Ad Agencies and Accountability

by Ben Thompson

It’s never a good thing when a news story begins with the phrase “summoned before the government.” That, though, is exactly what happened to Google last week in a case of what most seem to presume is the latest episode of tech companies behaving badly.

From The Times:

Google is to be summoned before the government to explain why taxpayers are unwittingly funding extremists through advertising, The Times can reveal. The Cabinet Office joined some of the world’s largest brands last night in pulling millions of pounds in marketing from YouTube after an investigation showed that rape apologists, anti-Semites and banned hate preachers were receiving payouts from publicly subsidised adverts on the internet company’s video platform.

David Duke, the American white nationalist, Michael Savage, a homophobic “shock-jock”, and Steven Anderson, a pastor who praised the killing of 49 people in a gay nightclub, all have videos variously carrying advertising from the Home Office, the Royal Navy, the Royal Air Force, Transport For London and the BBC.

Mr Anderson, who was banned from entering Britain last year after repeatedly calling homosexuals “sodomites, queers and faggots”, has YouTube videos with adverts for Channel 4, Visit Scotland, the Financial Conduct Authority (FCA), Argos, Honda, Sandals, The Guardian and Sainsbury’s.

Let me start out with what I hope is an obvious caveat:

  • I believe that free speech is a critical right, and that includes speech with which I strongly disagree (that’s the entire point)
  • That said, a right to free speech does not include a right to be heard, much less a right to monetize; anyone can host their own site and sell their own ads, but there is no right to Google’s or Facebook’s platforms or ad networks
  • To that end, it is perfectly legitimate to be upset at the fact proponents of hate speech or fake news or any other type of objectionable content are monetizing that content on YouTube or through DoubleClick (Google’s ad display network)

What is more interesting, in my opinion, is with whom should you be upset?

Google’s Responsibility

At first glance this seems like a natural place to extend my criticism of Google from two weeks ago after The Outline detailed how some of Google’s “featured snippets” contained blatantly wrong and often harmful information:

The reality of Internet services is such that Google will never become an effective answer machine without going through this messy phase. The company, though, should take more responsibility; Google told The Outline:

“The Featured Snippets feature is an automatic and algorithmic match to the search query, and the content comes for third-party sites. We’re always working to improve our algorithms, and we welcome feedback on incorrect information, which users may share through the ‘Feedback’ button at the bottom right of the Featured Snippet.”

Frankly, that’s not good enough. Algorithms have consequences, particularly when giving answers to those actually searching for the truth. I grant that Google needs the space to iterate, but said space does not entail the abandonment of responsibility; indeed, the exact opposite is the case: Google should be investing far more in catching its own shortcomings, not relying on a barely visible link that fails to even cover their own rear end.

Algorithms are certainly responsible for what is reported in The Times: ads are purchased on one side, and algorithmically placed against content on the other. So, bad Google, right?

To a degree, yes, but not completely; consider this paragraph at the end of The Times’ article:

The brands contacted by The Times all said that they had no idea that their adverts were placed next to extremist content. Those that did not immediately pull their advertising implemented an immediate review after expressing serious concern.

Were I one of these brands I would be concerned too; in fact, my concern would extend far beyond a few extremist videos to the entire way in which their ads are placed in the first place.

Ad Agencies and the Internet

Few advertisers actually buy ads, at least not directly. Way back in 1841, Volney B. Palmer, the first ad agency, was opened in Philadelphia. In place of having to take out ads with multiple newspapers, an advertiser could deal directly with the ad agency, vastly simplifying the process of taking out ads. The ad agency, meanwhile, could leverage its relationships with all of those newspapers by serving multiple clients:

It’s a classic example of how being in the middle can be a really great business opportunity, and the utility of ad agencies only increased as more advertising formats like radio and TV became available. Particularly in the case of TV, advertisers not only needed to place ads, but also needed a lot more help in making ads; ad agencies invested in ad-making expertise because they could scale said expertise across multiple clients.

At the same time, the advertisers were rapidly expanding their geographic footprints, particularly after the Second World War; naturally, ad agencies increased their footprint at the same time, often through M&A. The overarching business opportunity, though, was the same: give advertisers a one-stop shop for all of their advertising needs.

When the Internet came along, the ad agencies presumed this would simply be another justification for the commission they kept on their clients’ ad spend: more channels is more complexity that the ad agencies could abstract away for their clients, and the Internet has an effectively infinite number of channels!

That abundance of channels, though, meant that discovery was far more important than distribution. Increasingly users congregated on two discovery platforms: Google for things for which they were actively looking, and Facebook for something to fill the time. I described the impact this had on publishers in Popping the Publishing Bubble:

  • Editorial and ads were unbundled; the latter was replaced by ad networks that targeted users across multiple sites
  • However, this model makes for a terrible user experience and, more pertinently, it doesn’t work nearly as well on mobile, in part because the ads are worse, but also because it’s hard to track users via cookies
  • Google and Facebook, on the other hand, track users via identity, have superior ad units (especially Facebook on mobile), and have highly invested in advertiser tools that are far superior to anyone else’s

This is why I wrote in The Reality of Missing Out that Google and Facebook would take all of the digital advertising dollars:

Both companies, particularly Facebook, have dominant strategic positions; they are superior to other digital platforms on every single vector: effectiveness, reach, and ROI. Small wonder that the smaller players I listed above — LinkedIn, Yelp, Yahoo, Twitter — are all struggling…

Digital is subject to the effects of Aggregation Theory, a key component of which is winner-take-all dynamics, and Facebook and Google are indeed taking it all. I expect this trend to accelerate: first, in digital advertising, it is exceptionally difficult to see anyone outside of Facebook and Google achieving meaningful growth…Everyone else will have an uphill battle to show why they are worth advertisers’ time.

This is exactly what has happened. Just last week the Wall Street Journal reported on eMarketer’s forecast on digital advertising:

Total digital ad spending in the U.S. will increase 16% this year to $83 billion, led by Google’s continued dominance of the search ad market and Facebook’s growing share of display and mobile ads, according to eMarketer’s latest forecast. Google’s U.S. revenue from digital ads is expected to increase about 15% this year, while Facebook’s will jump 32%, more than previously expected, according to the market research company’s latest forecast report.

Snapchat is expected to grow from its small base, but everyone else will shrink: in other words, there are really only two options for the sort of digital advertising that reaches every person an advertiser might want to reach:

That’s a problem for the ad agencies: when there are only two places an advertiser might want to buy ads, the fees paid to agencies to abstract complexity becomes a lot harder to justify.

Accountability and Logistics

Again, as I noted above, there are reasonable debates that can be had about hate speech being on Google’s and Facebook’s platforms at all; what is indisputable, though, is that the logistics of policing this content are mind-boggling.

Take YouTube as the most obvious example: there are 400 hours of video uploaded to YouTube every minute; that’s 24,000 hours an hour, 576,000 hours a day, over 4 million hours a week, and over 210 million hours a year — and the rate is accelerating. To watch every minute of every video uploaded in a week would require over 100,000 people working full-time (40 hours). The exact same logistical problem applies to ads served by DoubleClick as well as the massive amount of content uploaded to Facebook’s various properties; when both companies state they are working on using machine learning to police content, it’s not an excuse: it’s the only viable approach.
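As a sanity check on those numbers, here is a minimal back-of-the-envelope sketch in Python (not from the original article); the only inputs are the 400 hours-uploaded-per-minute figure quoted above and the 40-hour work week used for the reviewer estimate.

# Back-of-the-envelope check of the YouTube figures quoted above.
# The only inputs are the 400 hours-per-minute upload rate and a 40-hour work week.
UPLOAD_HOURS_PER_MINUTE = 400
FULL_TIME_HOURS_PER_WEEK = 40

per_hour = UPLOAD_HOURS_PER_MINUTE * 60          # 24,000 hours of video per hour
per_day = per_hour * 24                          # 576,000 hours per day
per_week = per_day * 7                           # 4,032,000 hours per week (over 4 million)
per_year = per_day * 365                         # 210,240,000 hours per year (over 210 million)
reviewers = per_week / FULL_TIME_HOURS_PER_WEEK  # about 100,800 full-time people per week of uploads

print(per_hour, per_day, per_week, per_year, round(reviewers))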

Don’t tell that to the ad agencies though. WPP Group CEO Martin Sorrell told CNBC:

“They can’t just say look we’re a technology company, we have nothing to do with the content that is appearing on our digital pages,” Sorrell said. He added that, as far as placing advertisements was concerned, they have to be held to the same standards as traditional media organizations…

“The big issue for Google and Facebook is whether they are going to have human editing at this point … of course they have the profitability. They have the margins to enable them to do it. And this is going to be the big issue — how far are they prepared to go?” Sorrell said, adding they needed to go “significantly far” to arrest these concerns.

It really is a quite convenient framing for Sorrell (then again, he is the advertising expert): if only Google and Facebook wouldn’t be greedy and just spend a tiny bit of their cash windfall to make sure ads are in the right spot, why, everything would be just the way it used to be! What is convenient is that this excuses WPP from any responsibility: it’s all Google’s and Facebook’s fault.

Here’s the question, though: if Google and Facebook have all of the responsibility, then shouldn’t they also be getting all of the money? What exactly are WPP’s fees being used for? There are only two places to buy ads, so it’s not as if agencies are helping advertisers purchase across multiple outlets as they did in the past. And while there is certainly an art to digital ads, the cost and complexity are also lower than for TV, with the added benefit that it is far easier to use a scalable scientific approach to figuring out what works (as opposed to relying on Don Draper-like creative geniuses). Policing the placement of a specific advertising buy is also a much more human-scale problem than analyzing the entire corpus of content monetized by Google and Facebook.

Google Versus the Ad Agencies

It’s clear that Google knows that it is the agencies who are actually implicated by The Times’ report. In a blog post entitled Improving Our Brand Safety Controls the managing director of Google U.K. writes (emphasis mine):

We’ve heard from our advertisers and agencies loud and clear that we can provide simpler, more robust ways to stop their ads from showing against controversial content. While we have a wide variety of tools to give advertisers and agencies control over where their ads appear, such as topic exclusions and site category exclusions, we can do a better job of addressing the small number of inappropriately monetized videos and content. We’ve begun a thorough review of our ads policies and brand controls, and we will be making changes in the coming weeks to give brands more control over where their ads appear across YouTube and the Google Display Network.

The message is loud-and-clear: brands, if you don’t want your ads to appear against objectionable content, then get your agencies to actually do their job.

Make no mistake, the agencies know it too: there has been a lot of talk about a boycott of Google, but read between the lines about what is actually going on. For example, from Bloomberg:

France’s Havas SA, the world’s sixth-largest advertising and marketing company, pulled its U.K. clients’ ads from Google and YouTube on Friday after failing to get assurances from Google that the ads wouldn’t appear next to offensive material. Those clients include wireless carrier O2, Royal Mail Plc, government-owned British Broadcasting Corp., Domino’s Pizza and Hyundai Kia, Havas said in a statement.

“Our position will remain until we are confident in the YouTube platform and Google Display Network’s ability to deliver the standards we and our clients expect,” said Paul Frampton, chief executive officer and country manager for Havas Media Group UK.

Later, the parent company Havas said it would not take any action outside the U.K., and called its U.K. unit’s decision “a temporary move.”

“The Havas Group will not be undertaking such measures on a global basis,” a Havas spokeswoman wrote in an email. “We are working with Google to resolve the issues so that we can return to using this valuable platform in the U.K.”

This boycott is not about hurting Google, because the reality is that the ad agencies can do no such thing: Google and Facebook control the users, and that means the advertisers have no choice but to be on their platforms. If Havas actually had power they would pull their ads globally, and make it clear that the boycott was permanent absent significant changes; the reality is that the ad agencies are running a PR campaign for the benefit of their clients who are rightly upset — and, as noted above, were until now completely oblivious.

Taking Responsibility

To be clear, I’m not seeking to absolve Google and Facebook of responsibility, even as I recognize the complexities of the challenges they face. Moreover, one could very easily use this article to make an argument about monopoly power, which is another reason for Google and Facebook to address this problem before governments do more than summon them.

Advertisers and ad agencies, though, should be accountable as well. If ad agencies want to be relevant in digital advertising, then they need to generate value independent of managing creative and ad placement: policing their clients’ ads would be an excellent place to start. If The Times can do it so can WPP and the Havas Group.

Big brands, meanwhile, should expect more from their agencies: paying fees so an agency can take out an ad on Google or Facebook without taking the time to do it right is a waste of money — and, when agencies are asleep at the wheel as The Times demonstrated, said spend is actually harmful.

Above all, what Sorrell and so many others get so wrong is this: the Internet is nothing like traditional media. The scale is different, the opportunities are different, and the threats are different. Demanding that Google (and Facebook) act like traditional media companies is at root nostalgia for a world drifting away like the smoke from a Don Draper cigarette, and it is just as deadly.

Editor’s Note: In the original version of this article I conflated media buying and creative agencies and their associated fees; the article has been amended. AdAge has a useful overview of commissions and fees here.

20 Mar 18:32

As Uber president Jeff Jones quits, CEO Travis Kalanick ignores the toxic context

by Josh Bernoff

The bad news at Uber keeps on coming. This weekend, its President of Ridesharing and second-in-command, Jeff Jones, quit after only six months. This was an opportunity for Uber’s founder and CEO, Travis Kalanick, to take responsibility for problems that contributed to this departure. He failed.

The context for the Jeff Jones departure

First, some background. … Continued

The post As Uber president Jeff Jones quits, CEO Travis Kalanick ignores the toxic context appeared first on without bullshit.

20 Mar 18:32

Android 7.1.1 now available to OnePlus 3 & 3T owners via Oxygen OS 4.1.0 OTA

by Igor Bonifacic

OnePlus has released its latest OxygenOS update.

The incremental update, version 4.1.0, brings the OnePlus 3 and 3T to the latest version of Android, 7.1.1, and installs Google’s latest monthly security patch.

It also adds a number of new features specific to OxygenOS, including improvements to Wi-Fi and Bluetooth connectivity.

One other major update is a revamped video electronic image stabilization system. According to a spokesperson from the company, OnePlus “considers the video stabilization on par with the Pixel and better than the iPhone.”

As with all OTAs, OnePlus is rolling out Oxygen OS 4.1.0 to users in incremental batches. If you haven’t received a notification prompting you to upgrade to 4.1.0, you can manually check for the update by navigating to your smartphone’s settings menu and tapping “System updates” at the bottom of the list.

Source: OnePlus

The post Android 7.1.1 now available to OnePlus 3 & 3T owners via Oxygen OS 4.1.0 OTA appeared first on MobileSyrup.

20 Mar 18:31

Bike picture

by michaelkluckner

What? No bike picture for several days? Here’s one from 1915.



20 Mar 18:31

Starting a Website With Setapp [Sponsor]

by John Voorhees

This week, MacStories is sponsored by MacPaw, makers of Setapp.

Setapp is a subscription service for Mac apps that is a great place to start if you’re launching a blog. There’s more to starting a website than having a good text editor, though Setapp has one of the very best of those in Ulysses. Tools to organize your thoughts, focus your efforts, publish your articles, and manage your business are just as critical. Setapp has some of the very best apps in each category for just $9.99 per month.

Writers will appreciate the inclusion of iThoughtsX in Setapp for developing story ideas. When it’s time to start writing, utilities like HazeOver, which obscures windows other than the one in which you are working, and Be Focused, for Pomodoro-style timed writing sprints, are great options to keep you focused. When you’re finished writing, Markdown users will appreciate having Marked to preview how posts will look before they are published.

Setapp also includes apps to help you create and maintain your site, like RapidWeaver for site design and Blogo for publishing your posts. To keep your business and productivity on track, Setapp offers TaskPaper, a plain-text task manager that packs lots of power under the hood, and Timing to automatically track your work.

Finding the tools that fit with your work style is time-consuming and expensive. Setapp reduces the friction with a highly-curated library of excellent apps at an affordable monthly price.

Our thanks to MacPaw and Setapp for sponsoring MacStories this week.


Support MacStories Directly

Club MacStories offers exclusive access to extra MacStories content, delivered every week; it’s also a way to support us directly.

Club MacStories will help you discover the best apps for your devices and get the most out of your iPhone, iPad, and Mac. Plus, it’s made in Italy.

Join Now
20 Mar 18:31

A Scathing Critique of Transit Planning in Toronto

by pricetags

Tamim Raad draws attention to the comments on Toronto transit planning by those who were there in the ‘golden age,’ in this Globe and Mail article:


Toronto’s transit system was once such a wonder that, even into the 1980s, people came from around the world to study how it planned infrastructure projects, how it executed them and how it operated.

That so-called “golden age” also produced transit experts so revered, they got to travel the globe in return. For some, their views have been valued well past retirement age – though not so much in their hometown.

Three of them – Richard Soberman, Ed Levy and David Crowley – recently gathered for lunch and a gab. The Scarborough subway, which is to be voted on again March 28, was not the focus, but it came up often.

“We have to be careful; this idea there was a golden age is a bit of myth,” says Dr. Soberman, former chair of civil engineering at the University of Toronto and lead author of many seminal transportation reports dating to the early 1960s. “We did very good things – on time, on budget – but we made big politically driven errors back then, too. Building a subway [Spadina] on an expressway median was a huge one. Putting the Queen subway on Bloor has turned out to be a mistake.”

“Precisely,” says Mr. Levy, jumping in. Mr. Levy, a planner, engineer and author of Rapid Transit in Toronto, A Century of Plans, Projects, Politics and Paralysis, says that great cities that have been able to sustainably expand subways kept building from the middle out (and they didn’t tunnel in low-density areas).

By not doing Queen right after Yonge, “we missed a crucial starting point for network-building. We’ve never been able to get back to a logical order,” Mr. Levy says. “Call it the Queen line, relief line, whatever, the whole GTA has needed this piece of infrastructure for decades, but politicians keep wasting scarce capital on frills and vote buying.”

“Toronto’s biggest transit problem,” says Mr. Crowley, who specializes in data analysis, travel market research and demand forecasting, “is we’ve overloaded core parts of the subway. We’d basically done that on lower Yonge 30 years ago, when I was still at the TTC. We have to relearn the importance of downtown to the whole region, the whole country. We’re in danger of killing the golden goose.”

Noting that trains from Scarborough and North York are often full before crossing into the old city, Mr. Crowley says that, “data and demand patterns are telling us the stupidest thing we could do is make any of our lines longer [before putting another subway through the core].”

“Much as I like the Eglinton Crosstown idea, and it’s overdue, too,” Mr. Levy says, “I fear what it will do to Yonge-line crowding. Again, the sequence is so wrong.”

Are bureaucrats shirking their responsibility to speak truth to power?

“We sure needed [TTC chief executive] Andy Byford to be blunt about this Scarborough subway plan,” Mr. Levy says. “He should have spoken up.”

Might the reticence be what some call “the Webster effect”? (Mr. Byford’s predecessor, Gary Webster, was fired for objecting to then-mayor Rob Ford’s insistence the entire Eglinton Crosstown go underground).

“Unwillingness to speak up isn’t new,” Dr. Soberman says, citing pressure from North York politicians in the early 1970s that spurred two well-regarded TTC executives to vote for the Spadina subway in the expressway corridor “even though they knew only idiots would think it was a good idea.”

The difference is, he says, “back then politicians listened, even if they didn’t always take our advice. They respected facts. Now they only want confirmation of their preconceived ideas, and too many people [bureaucrats and private-sector consultants], who should be providing objective professional advice, are playing along with the game.”

“On Scarborough,” Mr. Levy says, “you won’t find a single independent transit professional who can support this, but they won’t say so publicly. The three of us can say this stuff without recrimination; we’re retired.”

“The minute the politicians speak,” Mr. Crowley says, “the civil service and the consulting community are happy to say, ‘Oh, that’s a great idea. Yes, let’s study that.’ I started to see this trend in the 1980s at the TTC. I’d raised serious, fact-based concerns about Sheppard-subway ridership forecasts and the role of the project. It upset people. I was told, ‘You’re never supposed to do that – you have to play along.’”

“That’s when I knew it was time to get out,” says Mr. Crowley, who went on to a career with international private-sector firms. “This Scarborough boondoggle, if we were talking about gas plants, it could bring down a government, but transit is ‘special’ for reasons I don’t understand.”

“We’ve also overestimated the potential of these sub-downtowns, especially on jobs,” Mr. Levy says. “It’s twisted our spending priorities.”

“Transportation planning has become a bullshit field,” says Dr. Soberman. “A civil engineer wouldn’t say a bridge is going to be safe if his calculations show it might fall down, but a transportation planner can say anything. There’s no downside other than you waste public funds.”

“And the more we waste public funds, the harder it is to raise tax revenue for transit needs,” Mr. Levy says. “We’ve badly underfunded transit, but people don’t trust politicians to spend money well. When was the last time we did anything good? The Kipling and Kennedy extensions? That’s nearly 40 years ago. Most people recognized Sheppard was a mistake, but people who learned from it are ignored. It’s often impossible to even get good ideas considered. Politicians have a role to play, but …”

“It’s always been political – always will be – but we need to get smarter about where politicians join the process,” Dr. Soberman says. “If you don’t generate good ideas, you’re guaranteed bad results. If you generate good ideas and they’re ignored, you won’t do any better. Current politicians are comfortable ignoring the people most likely to generate the best ideas. And the media, you guys, haven’t always helped. This subway-versus-LRT debate was simplistic and maddening. Scarborough deserves better transit, but the best options aren’t even being considered.” (Dr. Soberman would simply buy new rolling stock for the SRT and rebuild a bend to accommodate new vehicles.)

“Maybe we’re part of the problem,” Mr. Crowley says. “If the professionals had done a better job diagnosing problems, identifying prescriptions and educating politicians and the public on issues and options, politicians wouldn’t have moved into the vacuum.”

Getting in the last word, Mr. Soberman says, “too many people in positions of power don’t seem to know what they don’t know. Whether it’s at the province and Metrolinx or at the city and TTC, if we don’t figure out new governance models, we’ll never regain the public trust and Toronto will suffer for generations.”


20 Mar 18:31

Video: Floris van Eck’s #mexsession on designing future imaging experiences

by Marek Pawlowski

Floris van Eck gives an expansive look at the future of imaging experiences, drawing on his work for the likes of Canon and its Silicon Valley-based accelerator project, as well as his personal interests in the maker movement, cyborgs and the singularity.

If you can’t see the video embedded above, click here to watch this MEX talk on Vimeo.

Insights

  • The scarcity of captured images once gave them intrinsic value. However, technology has made image capture universal, with 1.5 billion new images recorded daily. Clay Shirky described it as the “largest expansion of human expression in history.”
  • The human relationship with images evolves constantly, from the first Stone Age wall paintings to the introduction of perspective in the Renaissance to today’s developments in stereo 3D. However, the obsession with capturing moments has remained constant.
  • New camera form factors are creating new perspectives and enabling new types of creativity. For instance, cameras clipped onto clothing can capture a series of images showing an overall atmosphere of someone’s life. Also, rugged cameras provide new views of extreme sports and 360 degree cameras capture scenes which would previously have been missed.
  • Visual is the dominant form of communication among Millennials. For instance, images sent via Snapchat and Vines.
  • The cadence of communication is increasing. Images remain interesting for shorter periods, partly because new ones emerge to replace them and partly because attention spans are falling. For instance, two-hour movies gave way to ten-minute YouTube videos, which in turn are giving way to six-second Vines.
  • Selfies represent a change in control of self-image. Previously people had to rely on others to capture their image. The style of selfies varies around the world according to cultural nuances of how people like to be perceived.
  • We do not see what we see, we see who we are. Seeing is never a passive act; the mind introduces filters which alter our perception. Perhaps fear of surveillance will give way to a recognition that computers are more objective than humans?
  • Examples of new imaging products include Ricoh’s Theta, the Frontback app, Google’s Project Tango and Global Forest Watch.
  • As the quantity of stored images grows, so do the opportunities to derive new data from them. Image analysis remains in its infancy, but may potentially uncover new information from existing images.
  • The world is increasingly a place where we watch each other being watched.

Recorded at MEX, March 2014

20 Mar 18:30

Freedom Mobile launching new ‘Ready To Go, South’ roaming add-on on March 23

by Ian Hardy

Freedom Mobile has been busy expanding its new LTE network and also strengthening its device lineup by adding the Samsung Galaxy A5 (available now) and the LG G6 (available April 7th) — which are both Band 66 LTE-compatible.

Freedom will also be adding to its roaming options and coming out with the “Ready To Go, South” add-on for $20 per month on March 23rd.

Included in the “South” roaming plan are 2,400 minutes, unlimited texting, and 1GB of data. As for the locations, here’s a complete list of destinations that are covered with the Ready To Go, South add-on:

Anguilla, Antigua & Barbuda, Aruba, Barbados, Bermuda, Bonaire, British Virgin Islands, Cayman Islands, Costa Rica, Curacao, Dominica, El Salvador, Grenada, Guatemala, Guyana, Haiti, Honduras, Jamaica, Mexico, Montserrat, Nicaragua, Panama, St. Kitts & Nevis, St. Vincent & Grenadines, St. Lucia, Suriname, Trinidad and Tobago and Turks & Caicos.

In addition, Freedom will rename the current US roaming add-on to “Ready To Go, US” on the same date, March 23rd, at the same price of $15 per month. Freedom’s roaming partner in the United States is AT&T.

Freedom Mobile currently has 1,052,758 wireless subscribers.

The post Freedom Mobile launching new ‘Ready To Go, South’ roaming add-on on March 23 appeared first on MobileSyrup.

20 Mar 18:30

Apple is reportedly testing AR features for the next iPhone

by Jessica Vomiero

It seems like the next iPhone could incorporate augmented reality capabilities. Based on recent reports from Bloomberg, Tim Cook is getting serious about bringing AR features to Apple’s most successful device.

Apple has reportedly begun work on several AR products including spectacles that connect wirelessly to the iPhone and beam content like movies and videos through the lenses.

While the spectacles are still a ways off, other AR features could show up in the iPhone sooner. Analysts have predicted that the global market for AR products will increase by 80 percent to $165 billion by 2024.

An AR-enabled iPhone wouldn’t be the first time a manufacturer has experimented with AR potential in a smartphone. Last year, Lenovo introduced the world’s first Tango-enabled smartphone, the Phab 2 Pro.

Project Tango is a Google initiative to bring AR features to smartphones, though there’s been little word about further development of the platform since the Phab 2 Pro launched.

Apple has yet to make any announcements regarding how AR features could be incorporated into the iPhone in the future.

Source: Bloomberg 

The post Apple is reportedly testing AR features for the next iPhone appeared first on MobileSyrup.

20 Mar 18:23

Google Scholar is a serious alternative to Web of Science



Anne-Wil Harzing, LSE Impact Blog, Mar 23, 2017


I agree with this assessment. "Commercial databases such as ISI and Scopus have systematic errors as they do not include many journals in the social sciences and humanities, nor have good coverage of conference proceedings, books or book chapters." They are, in a word, biased toward traditional scientific publications (which is also where they make their money). It makes a difference to me. According to Scopus my h-index is 5. According to Google Scholar my h-index is 26. That's a pretty large variance in the estimation of my academic impact. Via gsiemens.
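For context, the h-index being compared here is straightforward to compute from per-publication citation counts: it is the largest h such that h publications each have at least h citations. The sketch below (Python, with made-up citation counts purely for illustration) shows the definition, and why a database that indexes fewer of an author's outputs reports a lower value for the same person.

# Minimal sketch of the standard h-index definition: the largest h such that
# h publications each have at least h citations.
def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Illustrative (made-up) citation counts. A database with narrower coverage
# effectively sees a shorter list, and so reports a smaller h-index.
print(h_index([48, 33, 30, 22, 12, 9, 7, 3, 1]))  # 7
print(h_index([48, 33, 30, 22, 12]))              # 5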

20 Mar 18:21

Samsung Launches Bixby, Its Answer To Siri & Alexa

by Ashlee Kieler
mkalus shared this story from Consumerist.

Move over Siri, Cortana, and Alexa, there’s a new voice-controlled artificially intelligent assistant in town: Samsung’s Bixby.

Samsung announced Monday that its not-so-secret assistant Bixby will launch later this month, living inside the soon-to-be unveiled Galaxy S8 smartphone.

Unlike AI assistants from Apple, Google, and Amazon, Samsung says Bixby won’t just be a voice answering questions. Instead, the company claims the service is “fundamentally different,” offering a “deeper experience” by serving as a guide to customers’ phones, with the capability to support nearly all tasks that can be performed through Bixby-enabled apps.

Samsung says that new devices will feature a dedicated Bixby button on the side. The company believes this will alleviate confusion on how to activate the system.

For example, the company says in a blog post, instead of taking “multiple steps to make a call – turning on and unlocking the phone, looking for the phone application, clicking on the contact bar to search for the person that you’re trying to call and pressing the phone icon to start dialing – you will be able to do all these steps with one push of the Bixby button and a simple command.”

As for using Bixby within apps, users will be able to call on the assistant at any time. Once Bixby is activated, it will then understand the current context and state of the application and will be able to carry out the current work-in-progress continuously.

Samsung adds that over time, Bixby will be “smart enough” to understand commands with incomplete information and execute the tasks, meaning you won’t have to memorize specific commands to use the service.

“Bixby is the heart of our software and services evolution as a company,” Samsung said, adding that the service will eventually expand to other appliances from the company, including air conditioners and televisions.

For now, the company says that when the Galaxy S8 launches it will come with several pre-installed apps that will work with Bixby.





20 Mar 18:21

Uber President Leaving Company After Just Six Months

by Ashlee Kieler
mkalus shared this story from Consumerist.

It’s been a rough couple of months for Uber, with the brief-but-viral #DeleteUber campaign, and the company’s CEO being caught on camera berating one of his own drivers. Now comes news that Uber President Jeff Jones is exiting the company after less than a year on the job.

Recode reports that Jones, who joined just six months ago, decided to leave over differences with Uber’s approach to leadership.

“I joined Uber because of its Mission, and the challenge to build global capabilities that would help the company mature and thrive long-term,” Jones tells Recode. “It is now clear, however, that the beliefs and approach to leadership that have guided my career are inconsistent with what I saw and experienced at Uber, and I can no longer continue as president of the ride sharing business.”

While Jones didn’t point to specific incidents that led to his departure, sources tell Recode the decision to leave was closely related to the ride-hailing company’s recent public controversies, including a blog post by a former female engineer that accused the company of fostering an environment of sexism and harassment.

Additionally, the company has come under fire for its reported use of a tool that allows it to avoid regulation and law enforcement. The company has since said it would stop using the tool.

Sources tell Recode that Jones simply doesn’t like conflict, and conflict was apparently brewing at the ride-hailing company.

CEO Kalanick also confirmed Jones’ exit in a note to staff, obtained by Recode.

The letter suggests that Jones, who previously worked for Target, made the decision to leave after Kalanick announced that he would look to hire a Chief Operating Officer.

“It is unfortunate that this was announced through the press but I thought it was important to send all of you an email before providing comment publicly,” Kalanick wrote, noting that in six months, Jones had made an “important impact” on the company.

During his short tenure with the company, Recode reports that Jones spent time meeting with drivers to determine what the company could do better.