Shared posts

10 Sep 15:04

Currying is Delicious

by Julie Moronuki

I had never wanted to be a programmer exactly, but as I was introduced to Haskell, certain aspects of it appealed to me and I grew to love it. I present them here to help me organize my own thoughts and remember why Haskell is fun. One of the things I most enjoy in life is when a bunch of facts that you usually learn independently and seem separate can be fitted together into a tidy, coherent package or progression. Haskell offers a lot of opportunities for this.

Today I’d like to share with you the good news about currying. Most of what follows is in the book and there are approximately 10,000 blog posts about currying already, so why write one more? Because I’d like to take it a step farther than most people do.

The joy of currying for me was in finding that there was consistency in how kinds work – kinds being the types of types – such that I could know how some functors would work based on what I already knew.

Maybe I take joy in strange things.

The Function Datatype

It took me some time to come to grips with the idea that a lot of types aren’t just types; they’re type constructors and function-like in that they must be applied to some argument to construct a value of that type. It took me a bit longer to understand that the function arrow you see in type signatures is also a type constructor. It doesn’t look or act precisely like other type constructors, but don’t let that fool you.

The (->) type constructor has two parameters and no data constructors:

data (->) a b

We have two polymorphic type parameters here, a and b. A function takes an a argument and returns a b. It has no data constructors, but it does construct a value – constructing a function constructs a value of type a -> b.

But these two parameters only allow for one input and one output. So it must be true that Haskell functions take one argument and return one result.

We don’t know anything about our parameters, though, so they can be any type – including more functions. They may be different types, or they may be the same type. In other words, one function could take a function as the input argument and return a function as the output. If it does one or the other or both, we have a higher-order function.

Currying takes advantage of this ability of functions to return more functions. We know that all functions take one argument and return one result, but we also need the ability to write multiparameter functions. Currying, which happens by default in Haskell, allows us to write multiparameter functions that are really nested single-parameter functions.

The (->) associates to the right so that this:

function :: a -> b -> c -> d

that appears to have three arguments is really:

function :: a -> (b -> (c -> d))

Applying the function to an a results in another function and so on.

So, any “multiparameter” function is a series of nested one-argument functions, and any of those arguments could theoretically be functions. It’s this default currying that enables the cool thing called partial application. We’ll look at that in a moment.
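
Here’s a tiny illustration of that desugaring (add3 is just a name I’m making up for the example):

add3 :: Int -> Int -> Int -> Int
add3 x y z = x + y + z

-- the same function written as explicitly nested one-argument lambdas
add3' :: Int -> Int -> Int -> Int
add3' = \x -> (\y -> (\z -> x + y + z))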

Higher-Order Functions

Right, so the (->) type constructor has two parameters, and either argument can itself be a function. When we pass a function in as input to a function, we write the type like this:

function :: (a -> b -> c) -> b -> a -> c

with parentheses around that first input that is itself a function. If we didn’t, it would associate to the right, and we’d end up with a mess. Parenthesizing it means that function is one argument to our larger function. The right associativity and currying still apply, so in some way, what we have is:

function :: (a -> (b -> c)) -> (b -> (a -> c))

but this is done by default.
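
In fact, the Prelude’s flip has exactly this shape. Here’s a local copy, renamed so it doesn’t clash with the real one:

flip' :: (a -> b -> c) -> b -> a -> c
flip' f b a = f a b

-- flip' (-) 2 10 becomes (-) 10 2, which is 8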

One of the important points about functional programming is supposed to be that functions are first class. Since Haskell is my first language, I suppose I take it for granted and never understood the significance (I get why higher-order functions are useful and good, but it just seems so bloody natural to me). But what it means is what we’ve seen: functions can be passed as arguments or returned as values. They may also be stored in variables or data structures, like other values.

Currying and Partial Application

Partial application and why it’s useful is covered quite well by a number of other blog posts so I’ll try not to go on too long about it, and anyway I want to get to the exciting stuff soon.

While the function type associates to the right, function application associates to the left. So,

f x y
-- is really
(f x) y

where f is the function and x and y are the input arguments.

When we partially apply a function, we see a reduction in the parameters of the function’s type signature:

(+) :: Num a => a -> a -> a

-- partially applied
Prelude> let add5 y = 5 + y
Prelude> :t add5
add5 :: Num a => a -> a

Our new function, add5, will accept one input argument and return a result. Not terribly exciting, but, in fact, this gives us a way to define any number of useful functions, such as:

sum = foldr (+) 0
reverse = foldl (flip (:)) []

In both of those, we’ve partially applied folding functions to create functions that take one list input, without having to rewrite the other arguments over and over. Pretty sweet.

We can even compose partially applied functions:

(take 5 . filter even . enumFrom) 4

enumFrom takes one input and generates a list, while both filter and take require lists as their second inputs, not their first ones:

take   :: Int         -> [a] -> [a]
filter :: (a -> Bool) -> [a] -> [a]

By applying them to one argument each (5 and even), they become functions prepared to accept the list generated by enumFrom as input.
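
And in GHCi, the whole pipeline evaluates just as you’d hope:

Prelude> (take 5 . filter even . enumFrom) 4
[4,6,8,10,12]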

Currying allows you to build more specialized functions via partial application, so the order of arguments is a design concern with functions you’ll want to reuse. For example, you’re more likely to want to build a filtering function that will filter all the zeroes out of any list it is given, which can be easily done by partially applying filter to a test for nonequality with 0, than to build one where filter is partially applied to some list first and awaits its filtering criterion. A similar idea gives us the partially applied folds that work as sum and reverse functions.
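
For instance, here’s that zero-filtering function, built by partially applying filter to the predicate (dropZeroes is a name I’m inventing for the example):

Prelude> let dropZeroes = filter (/= 0)
Prelude> dropZeroes [0, 1, 0, 2, 3, 0]
[1,2,3]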

Kinds

All of that is very nice, but for me the really cool part came when I discovered that the types of types, called kinds, work the same way, and knowing this about kinds gives you quite a lot of information about how other things work, things that seem complicated and mind-bending when you approach them cold. So many things just made sense when I realized they’re a logical progression from how the function type works.

So, in case you’re not familiar with kinds already, here’s a very short rundown. Kinds are represented by stars, (*). Some datatypes, the ones that don’t take any arguments, are essentially type constants rather than type constructors, just as some data constructors are constant values (think of True and False – they don’t construct a value, really, they just are). Those datatypes, such as Bool, are concrete types, so they have one star:

Prelude> :kind Bool
Bool :: *

That tells us that the kind of Bool is just one star. It’s not a constructor; that is, it’s not a function. It’s a concrete type – like a value at the type level.

On the other hand, List and Maybe, datatypes which must be applied to a type argument before they can construct a value of that type, are represented at the kind level with a function arrow:

Prelude> :kind []
[] :: * -> *
Prelude> :kind Maybe
Maybe :: * -> *

See, they’re functions! You apply them to an argument and construct a type.

Types that take more arguments have even more stars. A datatype called Either takes two parameters:

data Either a b = Left a | Right b

and the a is in the Left side of the sum type, while the b goes to the Right. Two parameters means it has three stars:

Prelude> :kind Either
Either :: * -> * -> *

To construct data of this type, then, we need to apply it to two arguments. Now, just like functions, this associates to the right, and you can partially apply it:

Prelude> :k (Either Int)
(Either Int) :: * -> *
Prelude> :k (Either Int String)
(Either Int String) :: *

Woot! It does just what we expect from watching what happened to the types of functions as we partially applied them.

And just the way there’s an interaction between argument order and likelihood of partial application for functions, there’s a similar interaction here. To understand why, let’s look at how Functor works.

The Functor will see you now

In case you’re not familiar with Functor already, a hand-wavy explanation is that it’s a generalization of a map function. Just as you can map a function over a list:

Prelude> map (<3) [1, 2, 3]
[True,True,False]

You can fmap a function over a lot of different types of structured data:

Prelude> fmap (<3) (Just 4)
Just False

But Functor needs a type that is kind * -> * – if it’s * then there’s nothing to apply a function to; if it’s * -> * -> * then there’s too much to apply a function to, so * -> * is the kind that’s just right for Functor:

class Functor (f :: * -> *) where
  fmap :: (a -> b) -> f a -> f b

The f structure that we lift the function over needs to be kind * -> *. Incidentally, the infix operator for fmap is <$> so the following are equivalent:

fmap (*8) [4, 5]
(*8) <$> [4, 5]

So, how do we get a Functor for Either? We saw above what partially applying Either does to its kind signature, and we see that fmapping over Either values looks like this:

Prelude> fmap (+1) (Left 1)
Left 1
Prelude> fmap (+1) (Right 1)
Right 2

Our function can’t be applied to whatever is inside the Left because what is inside Left is our a and that’s what we had to partially apply to make Either have the kind * -> *. When we write a Functor instance for Either (or tuples, or any datatype with two parameters) to tell the compiler how fmap should work for that type, we have to partially apply the type constructor which makes the first type argument (or arguments, if our datatype has more than two parameters) part of the structure that Functor lifts over.
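
To make that concrete, here’s what such an instance looks like, sketched on a hand-rolled clone of Either (the real instance already lives in base, so I’m using primed names to keep this compilable):

data Either' a b = Left' a | Right' b deriving Show

instance Functor (Either' a) where
  fmap _ (Left' x)  = Left' x
  fmap f (Right' y) = Right' (f y)

The instance head is Functor (Either' a) – the type constructor partially applied to its first argument – which is exactly why fmap leaves Left values untouched.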

Anyway, so we’re finally getting to the part that was really exciting for me, although it seems to follow so naturally from what came before, it’s almost a disappointment. Almost.

The Functor of Functions

Friends, the function type itself has a Functor instance. It also has Applicative and Monad instances, but first things first.

Since the function type has two parameters, it is kind * -> * -> *, so right away you know we have to apply it to its first argument before we can get a Functor for it.

It feels a little different from Either, though, because now it isn’t whatever type is safely contained within the Left a of our Either type that becomes part of the structure – now, it’s the first input to our function that will be part of the “structure.”

For the Functor, this only amounts to function composition. We can see this by comparing the types:

-- function composition
(.) :: (b -> c) -> (a -> b) -> (a -> c)
-- fmap
<$> :: (a -> b) -> f a      -> f b

-- we can change letters without changing the meaning

:: (b -> c) -> (a -> b) -> (a -> c)
:: (b -> c) ->     f b  ->     f c

-- f here is ((->) a)
-- that is, a partially applied function

:: (b -> c) ->      (a -> b) ->      (a -> c)
:: (b -> c) -> ((->) a)   b  -> ((->) a)   c

-- change the prefix notation into infix

:: (b -> c) -> (a -> b) -> (a -> c)
:: (b -> c) -> (a -> b) -> (a -> c)

They’re the same! Which means the Functor of functions isn’t terribly exciting! What is exciting is that something that at first seemed weird and alienating – what do you mean you fmap a partially applied function over another partially applied function? – isn’t so weird. It follows from what we already knew.
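
You can check the equivalence in GHCi:

Prelude> (fmap (+1) (*2)) 10
21
Prelude> ((+1) . (*2)) 10
21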

The Applicative of functions is pretty exciting, though, because it allows us to create a context in which two functions are awaiting the same input and a third function can apply to the result of both of those functions, once that input is supplied.

Waaaaat? No, for real:

Prelude> ((+) <$> (^2) <*> (+10)) 4
30

Our two partially applied functions, (^2) and (+10), are waiting for an input; the tie-fighter-looking operator between them is our applicative operator. We’re also fmapping (+) over that whole deal so that the results of both those functions will be the two arguments to the addition function. Essentially, something like this:

(+) (4^2) (4+10)
-- 16 + 14
30

Meh, what a lot of trouble. Who wants to write all that out all the time?

Of course we don’t want to do that, so we just give it a name:

function :: Integer -> Integer
function = ((+) <$> (^2) <*> (+10))

Prelude> function 4
30
Prelude> function 5
40

These trivial arithmetic examples in blog posts admittedly always seem…well, trivial. But it turns out to be a handy pattern:

function = (&&) <$> (> 14.5) <*> (< 20.1)

Prelude> function 3
False
Prelude> function 16
True

We’ll return True whenever the input is between 14.5 and 20.1 (but not equal to either).

So partial application and currying have allowed us to create a context in which we have functions sort of hanging around waiting for some input to come from somewhere in the environment, and we don’t need to keep stringing that argument explicitly through every function. The Monad and transformer versions, called Reader and ReaderT, are quite commonly used because the pattern is so useful.
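
To give a small taste, here’s the earlier range check rewritten with Reader from the mtl package – just a sketch, with inRange as my own name for it, and it assumes mtl is installed:

import Control.Monad.Reader

inRange :: Reader Double Bool
inRange = (&&) <$> asks (> 14.5) <*> asks (< 20.1)

-- runReader inRange 16  evaluates to True
-- runReader inRange 3   evaluates to False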

Sometimes people ask us why we don’t split the book into two volumes. What it comes down to is that all the really important stuff is in the first half, but people always think that to understand Haskell, they need the second half (and they do! there’s a lot of stuff in there that would be difficult and tiresome to work out for yourself!). But much of what comes in the second half is just more complex ways to nest lambdas and apply and compose functions, which is what the first half is all about.

Many thanks to John DeGoes and Adam McCullough for their thorough, though not at all brutal, feedback.

Next post is likely to be a disquisition about disjunction.

22 Jun 21:22

Ultimate AI battle - Apple vs. Google

Yesterday, Apple launched its Worldwide Developer’s Conference (WWDC) and had its public keynote address. While many new things were announced, the one thing that caught my eye was the dramatic expansion of Apple’s use of artificial intelligence (AI) tools. I talked a bit about AI with Hilary Parker on the latest Not So Standard Deviations, particularly in the context of Amazon’s Echo/Alexa, and I think it’s definitely going to be an area of intense competition between the major tech companies.

Pretty much every major tech player is involved in AI—Google, Facebook, Amazon, Apple, Microsoft—the list goes on. Recently, some commentators have suggested that Apple in particular will never catch up with the likes of Google with respect to AI because of Apple’s strict stance on privacy and unwillingness to gather/aggregate data from all its users. However, yesterday at WWDC, Apple revealed a few clues about what it was up to in the AI world.

First, Apple mentioned deep learning more than a few times, including specifically calling out its use of LSTM in its Messages app and facial recognition in its Photos app. Previously, Apple had been rumored to be applying deep learning to its Siri assistant and its fingerprint sensor. At WWDC, Craig Federighi stressed Apple’s continued focus on privacy and how Apple does not need to develop “user profiles” server-side, but rather does most computation on-device (in this case, on the iPhone).

However, it can’t be that Apple does all its deep learning computation on the iPhone. These models tend to be enormous and take advantage of reams of data that can only be reasonably processed server-side. Unfortunately, because most companies (Apple in particular) release few details about what they do, we may never know how this works. But we can definitely speculate!

Apple vs. Google

I think the Apple/Google dichotomy provides an interesting opportunity to talk about how models can be learned using data in different ways. There are two approaches being represented here by Apple and Google:

  • Google way - Collect lots of data from users and store them on a server in the Googleplex somewhere. Then use that data to fit an enormous model that can predict when you’ve taken a picture of a cat. As users generate more data, bring that data back to the Googleplex and update/refine the model.
  • Apple way - Build a “starter model” in the Apple Mothership. As users generate data on their phones, bring the model to the phone and update the model using just their data. Bring the updated model back to the Apple Mothership and leave the user’s data on the phone.

Perhaps the easiest way to understand this difference is with the arithmetic mean, which is perhaps the simplest “model”. Suppose you have a bunch of users out there and you want to compute the average of some attribute that they have on their phones (or whatever device). The first approach would be to get all that data and compute the mean in the usual way.

Google way

Once all the data is in the Googleplex, we can just use the formula

\bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i

I’ll call this the “Google mean” because it requires that you get all the data X1 through Xn, then sum them up and divide by n. Here, each of the Xi’s represents the ith user’s data. The general principle here is to gather all the data and then estimate the model parameters server-side.

What if you didn’t want to gather everyone’s data centrally? Can you still compute the mean?

Apple way

Yes, because there’s a nice recurrence formula for the mean:

\bar{X}_{n+1} = \frac{n}{n+1}\,\bar{X}_n + \frac{1}{n+1}\,X_{n+1}

We can call this the “Apple mean”. With this strategy, we can send our current estimate of the mean to each user, update our estimate by taking the weighted average of the old value and the new value, and then move on to the next user. Here, you send the model parameters out to the users, update those parameters and then bring the parameters back.
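
To make that update rule concrete, here’s a minimal sketch in Haskell; the function names are mine, and each user is assumed to contribute a single number:

updateMean :: (Double, Int) -> Double -> (Double, Int)
updateMean (m, n) x = (m + (x - m) / fromIntegral n', n')
  where n' = n + 1

streamingMean :: [Double] -> Double
streamingMean = fst . foldl updateMean (0, 0)

-- streamingMean [1, 2, 3, 4] gives 2.5, the same answer as the "Google mean",
-- because m + (x - m) / (n + 1) is algebraically the weighted average above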

Which method is better? Well, in this case, both give you the same answer. In general, for linear models (like the mean), you can usually rework the formulas to build out either “whole data” (Google) approaches or “streaming” (Apple) approaches and get pretty much the same answer. But for non-linear models, it’s not so simple and you usually cannot achieve this kind of equivalence.

Clients and Servers

Balancing how much work is done on a server and how much is done on the client is an age-old computing problem and, over time, the balance of work between client and server seems to shift back and forth like a pendulum. When I was in grad school, we had so-called “dumb terminals” that were basically a screen that you used to login to the server. Today, I use my laptop for computing/work and that’s it. But I use the cloud for many other tasks.

The Apple approach definitely requires a “fatter” client because the work of integrating current model parameters with new user data has to happen on the phone. With the Google approach, all the phone has to do is be able to collect the data and send it over the network to Google.

The Apple approach is also closely related to what my colleagues Martin Lindquist and Brian Caffo refer to as “fusion science”, whereby Big Data and “Small Data” can be fused together via models to improve inference, but without ever having to actually combine the data. In a Bayesian context, you might think of the Big Data as making up the prior distribution and the Small Data as the likelihood. The Small Data can be used to update the model parameters and produce the posterior distribution, after which the Small Data can be thrown out.

And the Winner is…

It’s not clear to me which approach is better in terms of building a better model for prediction or inference. Sadly, we may never have enough details to find out, and will only be able to evaluate which approach is better by the performance of the systems in the marketplace. But perhaps that’s the way things should be evaluated in this case?

22 Jun 21:22

Ed-Tech and the Commercialization of School

I was invited to speak this evening to Alec Couros and Katia Hildebrandt’s class on current ed-tech issues, #ECI830. As part of the course, students are engaging in a “Great Ed-Tech Debate,” arguing one side or another of a variety of topics: that technology enhances learning, that technology is a force for equity, that social media is ruining childhood, and so on. Tonight’s debate: “Public education has sold its soul to corporate interests in what amounts to a Faustian bargain.” Here are some of the remarks I made to the class about commercialization and education technology.

Ed-tech is big business. I’ll start with some numbers: According to one market analyst firm, the ed-tech market totaled $8.38 billion in the 2012–13 academic year. 2015 was a record year for ed-tech investment, with some $2.98 billion in venture capital going to startups in the industry. Companies and venture capitalists alike see huge opportunities for what they insist will be a growing market: last year, McKinsey called education a $1.5 trillion industry. One firm predicted that the “smart education and learning market” will grow from $105.23 billion in 2015 to $446.85 billion by 2020. Testing and assessment are the largest category of this market. Testing and assessment remain the primary reason why schools buy computers; these are also the primary purposes for which teachers say they use new technologies in their classrooms.

We can’t talk about corporate interests and ed-tech without talking about testing. We can’t talk about corporate interests and ed-tech without talking about politics and policies. Why do we test? Why do we measure? Why has this become big business? Why has this become the cornerstone of education policy?


There’s something about our imagination and our discussion of education technology that, I’d contend, triggers an amnesia of sorts. We forget all history – all history of technology, all history of education. Everything is new. Every problem is new. Every product is new. We’re the first to experience the world this way; we’re the first to try to devise solutions.

So when people say that education technology enables a takeover of public schools by corporate interests, it’s pretty easy to look at history and respond “No. Not true.” Schools have long turned to outside, commercial vendors in order to provide goods and services: pencils, paper, chairs, desks, clocks, bells, chalkboards, milk, crackers, playground equipment, books. But rather than pointing to this and insisting that there’s always been someone selling things to schools and therefore selling to schools is perfectly acceptable, we should look more closely at how the relationship between public schools and vendors has changed over time: what’s being sold, who’s doing the selling, and how all that influences what happens in the classroom and what happens in the stories society tells itself about education. The changes here – to the stories, to the markets – aren’t merely a result of more “ed-tech,” but again, we need to ask if and how and why “ed-tech” might be a symptom of an increasing commercialization of education not just the disease.

Again, when we talk about “ed-tech,” we usually focus on recent technologies. We don’t typically consider the chalkboard, the textbook, the pencil, the window, the photocopier. When we say “ed-tech,” we often mean “computers.” But even then we don’t think of the large mainframe computers and the terminals that students were using in the 1970s, for example. Ed-tech amnesia: we act as though nobody thought about using computers in the classroom until Steve Jobs introduced the iPad, or something. Indeed, a founder of an ed-tech company was recently cited in The New York Times as saying “Education is one of the last industries to be touched by Internet technology,” to which I have to offer an important correction: universities actually helped invent the Internet. (And I want to return to this point in a minute: who do we identify – schools or businesses, the public sector or the private sector – as being the locus of ed-tech “innovation”?)

I am particularly interested in the history of education technologies that emerged before the advent of the personal or mainframe computer, before the Internet, in the early parts of the twentieth century. This is when, for example, we saw the development of educational psychology as a field and in turn the development of educational assessment. This is when the multiple choice test was first developed, as well as the machines that could grade these types of tests. To give you some dates: Frederick Kelly is often credited with the invention of the multiple choice test in 1914; the first US patent for a machine to score this type of test – that is, to detect pencil marks on paper and compare them to an answer key – was filed in 1937. IBM launched a commercial service for a “test scoring machine” that same year.

Speaking of commercial services and commercial interests then, standardized testing was already a big business by the 1920s. Enrollment in public schools was growing rapidly at this time, and these sorts of assessments were seen as more “objective” and more “scientific” than the insights that classroom teachers – mostly women, of course – could provide. Public schools were viewed as failing – failing to educate, failing to enculturate, failing to produce career and college and military-ready students. (Of course, public schools have always been viewed as failing.) They were deemed grossly inefficient, and politicians and administrators alike insisted that schools needed to be run more like businesses. The theories of scientific management were applied to schools, and “schooling” – the process, the institution – increasingly became viewed as a series of inputs and outputs that could be measured and controlled.

Computers, in many many ways, are simply an extension of this. Learning analytics is often framed as a “hot new trend” in education. But it’s actually quite an old one. Thanks to new technologies, we do have more data now to feed these measurements and assessments.

We also have, thanks to new technologies, a renewed faith in “data” as holding all the answers: the answers to how people learn, the answers to how students succeed, the answers to why students fail, the answers to which teachers improve test scores, the answers to which college majors make the most money, the answers to which TV shows make you smarter or which breakfast cereals make you dumber, and so on. Again, this obsession with data isn’t new; it’s rooted in part in Taylorism – in a desire for maximized efficiency (which is in turn a desire for maximized cost-savings and maximized profitability).

There’s an inherent conflict, I’d argue, between a culture that demands learning efficiency and a culture that recognizes learning messiness. It’s one of the reasons that schools – public schools – have been viewed as spaces distinct from businesses. Humans are not widgets. The cultivation of a mind cannot be mechanized. It should not be mechanized. Nevertheless, that’s been the impetus – an automation of education – behind much of education technology throughout the twentieth century. The commercialization of education is just one part of this larger ideology.

Alongside the push for more efficiency in education – through technology, through scientific management – has been a call for more competition in education. The Nobel Prize-winning economist Milton Friedman, for example, called for school vouchers in the 1950s, arguing that families should be able to use public dollars to send their children to any school, public or private – one should be “free to choose,” as he put it – and that choice and competition would necessarily improve education. During the latter half of the twentieth century, this idea of competition and of outsourcing gained political prominence. Some schools started to turn to outside vendors for remedial education – to companies like Sylvan Learning, for example. And some schools started to turn to vendors for instruction in specific content areas, such as foreign languages. By the 1990s, companies like Edison were offering “school management” in its entirety as a for-profit business. These were never able to demonstrate that they were better than traditional public schools; often they were much worse.

But as my short history here should underscore, the privatization of all or part of public schools was already well underway, in no small part because of the power of this dominant narrative: that competition and efficiency were the purview of the private sector and were something that the public sector simply couldn’t get right.

No surprise, I suppose, this is the story you hear a lot from today’s technology and education technology entrepreneurs and investors – many of whom are involved politically and financially in “education reform” efforts. It’s as I cited at the outset: there’s almost complete amnesia about the long history of ed-tech and about the role that schools have played in the development of the tech itself and of associated pedagogical practices. (LOGO came from MIT. The web browser came from the University of Illinois. PLATO came from the University of Illinois. TurnItIn came from Berkeley. WebCT came from UBC. Google’s origins are at Stanford. ) Nevertheless, you’ll hear this: “school is broken” – it’s that old story again and again. Tech companies assure us that they’ll fix it. Fixing schools requires “innovation”; “innovation” requires the private sector. “Innovative schools” are the ones that have most successfully adopted business practices – scientific management – and that have bought the most technology.

To reiterate, the problem isn’t simply that schools are spending billions of taxpayer dollars on technology. That is, the problem is not simply that there are businesses that sell products to schools; businesses have always sold products to schools. The problem is that we don’t really examine the ideologies that accompany these technologies. How, for example, do new technologies coincide with ways in which we increasingly monitor and measure students? How do new technologies introduce and reinforce the values of competition, individualism, and surveillance? How do new technologies change the way in which we recognize and even desire certain brands in the classroom? How do new technologies – the insistence that we must buy them, we must use them – help to change the purpose of school away from civic goals and towards those defined by the job market? How do new technologies themselves view students as a commercial product?

When I insist that “there’s a history to ed-tech,” some people hear me say “nothing has changed.” But that’s not my message. Ed-tech in 2016 is different than ed-tech in 1916. I mean, clearly the tech is different. But the political and economic power of tech is different too. Some of the biggest names in education philanthropy are technologists: Bill Gates, Mark Zuckerberg. Former members of the US Department of Education now and in the past work for ed-tech companies or as ed-tech investors. And to close with a number that I opened with: last year, one investment analyst firm calculated that $2.98 billion had been invested in ed-tech startups. The money matters. But I’d contend that the narratives that powerful people tell about education and technology might matter even more.

22 Jun 21:22

Bitcoin Experiences – Part 2

by Martin

I’m quite fascinated by Bitcoin, and in a previous post I started to describe what I have learned by putting theory into practice. In this second part I continue the story, as there is still a bit to tell about Bitcoin exchanges, market volatility, limitations of the system and how bad guys might be using the system to their advantage. Read on for the details.

Bitcoin Exchanges / Market Places

One way to convert Euros, Dollars or any other currency into Bitcoins is to use websites that offer such a service. For the details, see the end of the previous post on the topic. While this is fast, such websites take quite a cut from the transaction. When I converted a few Euros into Bitcoin, the difference compared to getting Bitcoins via a trade was around 5 percent.

To trade Bitcoins, a market place or exchange is required where sellers can meet buyers and where supply and demand determine the price. Bitcoin.de is such an exchange and trades are made directly between a buyer who wants to exchange Euros into Bitcoins and a seller who has Bitcoins and wants to exchange them for Euros. To make sure the seller really has the Bitcoins he offers, he has to transfer them to a Bitcoin ID of the Bitcoin market place. When a sale is made, the exchange will give the buyer the bank account details of the seller so he can make a bank transfer of the amount of Euros required to buy the Bitcoins. Once the seller confirms to the exchange that the money has been received, the Bitcoins are removed from his account at the exchange and put into the account of the buyer. Bitcoin.de charges half a percent to each party for its services, which is much less than what the aforementioned Bitcoin portal charges. The buyer can then move the acquired Bitcoins to one of his own Bitcoin IDs outside the exchange. This way the buyer does not have to trust the seller as the seller’s Bitcoins are held by the Bitcoin exchange. In other words, the seller can’t run away with the money and the Bitcoins. Obviously, both seller and buyer have to trust in the security of the Bitcoin exchange platform. More about that later. To prevent money laundering, users have to go through a bank account validation and video authentication procedure. Bitcoin.de charges 10 euros for the pleasure.

Converting Bitcoins to Euros and Dollars

The same two methods described above can also be used to convert Bitcoins into Euros or any other currency. The fastest method is to use a Bitcoin portal that offers to buy Bitcoins at the current market value plus a premium for their services and transfer the corresponding amount of Euros to a banking or Paypal account. If time is not of the essence, using the Bitcoin exchange is a cheaper alternative, but it could take some time before a buyer and seller meet and again additional time for the bank transfer to be executed. In other words, if you don’t have a special bank account connected to the Bitcoin exchange (which I did not have) a transaction can take one or two days, or longer if a weekend falls in between.

Volatility

Owning Bitcoins is a volatile business. While I was writing the two blog posts on this topic, the price for a Bitcoin changed from 400 to 440 Euros, i.e. 10 per cent in just two days. Changes in the other direction can happen just as quickly as the graph at Coindesk shows quite vividly. In other words, holding Bitcoins for a longer amount of time is a risky business.

Trust

While sending and receiving Bitcoins is straightforward and requires no trust between the two parties as far as the transaction is concerned, using Bitcoin portals or exchanges to convert Bitcoins into Euros or vice versa is another matter. In the case of a portal, you have to trust the portal to actually send you the Bitcoins for the money you have paid to it. In the other direction, you have to trust the portal to send you the amount of Euros for the Bitcoins you have sent to the portal. As the portal isn’t a bank, it is not state regulated, so if the portal goes out of business or is attacked by cyber criminals while your transaction is ongoing, you could be out of luck.

When using a Bitcoin exchange for trades between buyers and sellers, Bitcoins are kept at the Bitcoin exchange until the transaction is finalized, which, as described above, can take several days. If during that time something goes wrong with the Bitcoin exchange you are again out of luck, they are not a bank so there’s no insurance to cover any loss. This might all sound theoretical but it has happened in practice before, Mt.Gox is a prominent example. Therefore select the Bitcoin exchange you want to trade on carefully.

Bitcoins And Bad Guys

In part 1 I’ve mentioned that Bitcoins are anything but anonymous. All Bitcoin IDs on which Bitcoins are deposited are part of the public ledger as are all transactions ever made. Thus if a Bitcoin ID is ever used for receiving a payment that was previously used to exchange Bitcoins into other currencies for which identification was necessary, anonymity is gone. As all transactions are also public, anonymity is also lost if another Bitcoin ID from which money was sent to this Bitcoin ID was at any point in time used for currency exchange for which an identification was necessary.

Given all that, how could bad guys use Bitcoins to receive ransom and engage in money laundering activities if the authorities just have to monitor Bitcoin IDs that were used to receive a ransom? After doing a bit of research I found the answer: Mixers! Mixers are Bitcoin websites that receive Bitcoins on one Bitcoin ID and send the same amount of Bitcoins, minus a commission, to another, totally unrelated Bitcoin ID. This way the transaction log is broken and anonymity can be restored. Obviously, the user of a mixer has to trust the mixer, because there is nothing that would prevent the operator of a mixer from taking the Bitcoins at some point in time and running. In addition, the bad guy needs to ensure that the transaction process itself is anonymous, which means he has to use TOR to send transactions to the Bitcoin system and to query for the result. So while there are many legitimate uses for TOR and Bitcoin mixers, they can obviously be misused.

Problems And Limits

Having said all of this, there are a number of problems in the Bitcoin system that one should be aware of to make informed decisions. The main ones are that 50% of the Bitcoin mining and transaction capabilities are in the hands of two Bitcoin pools, both operated out of China. There are critics who say this is too much power over the system in too few hands. Another thing that I’ve mentioned before is that the Bitcoin system is limited by design to around 7 transactions per second. That is way too little and way too slow to make Bitcoins a mainstream method for real time payments. And a third thing, also already mentioned, is the price volatility, which makes buying, selling and using bitcoins a very speculative thing. For more details, here’s a podcast I can recommend.

Summary

My takeaway from using Bitcoin to donate to a number of open source projects is mixed. Without significant effort invested in using TOR and mixers, it is not an anonymous payment system. But even without that extra effort, converting Euros to Bitcoins and then sending Bitcoins is either more effort than using other online payment services such as Paypal or traditional bank transfers, or commissions are higher. So I can’t really see a major advantage of using Bitcoins over other methods of payment for such transactions. That would leave Bitcoins as an investment strategy. Given its volatility, however, that’s not something I’d risk with my money. It was good to have tried it out, but for the moment I don’t see how Bitcoins could change the way I conduct business online.

22 Jun 21:22

What are you actually going to do about it?

There are lots of things that could be improved about the world, and many things to be unhappy about. You’ve probably got a long list of such things, like everyone else. Now, what are you going to do about them? No, really, what are you going to actually do?

Very often, the answer is: “nothing productive”. And when that’s the answer, better to focus your energies where you are prepared and able to take meaningful action. Then work on accepting the rest. Yes, this is difficult, but it’s worth doing.

Note: I’m writing this post as a reminder for myself.

The alternatives aren’t good: you could carry around malcontent and then let it out when triggered. You can vent, complain, debate, argue. You can discuss endlessly. You can develop a habit of reading things that make you mad or let you feel righteously indignant. Of course, none of these actions has much chance of meaningfully addressing the problems you see with the world, except perhaps by accident. You’re using up time and energy merely reacting to the mismatch between the state of the world as it is and the state of the world as you’d like it to be.

Meaningful action occurs when you recognize the mismatch between “the world as it is” and “the world as you’d like it to be” and then take deliberate action designed to have some impact. Maybe that deliberate action will be effective, or maybe not; either way you’ve made progress.

Deliberate action can take many forms, the key is just that it’s deliberate. You take the action intentionally, with some idea of what impact you hope it will have.

It all sounds sensible, and yet the world is just full of people who do lots of talking and not much doing. Why is that? Here’s how it can happen:

  • First and foremost, you want to be comfortable. So you get a job. Rarely does the world pay you to do exactly the things you care most about, so instead you settle for something “in the ballpark”. Maybe it’s a good job, maybe it’s pretty interesting, and you get to work with some fun people. Great!
  • Meanwhile, your job takes up a lot of time and energy. You find that when you get home, you don’t have as much energy to laser focus into meaningful action. You want to unwind a bit. And if you’ve got a family, you have very little free time outside of work as it is. With the time and energy you do have available, maybe you make some focused effort at something, but it can’t be sustained enough to have a real impact and you get discouraged.
  • Over time, you might develop into the kind of person who stops caring about all that much. After all, what impact can one person really have?
  • Or perhaps that drive to try improving some things about the world is still there for you, a little itch in your brain. But since you aren’t really doing much about that itch, you have some cognitive dissonance. You might say that solving world hunger or doing something about gender inequality is something you care about, but your actions indicate that your priorities are to be reasonably comfortable and entertained. You take some actions reactively, almost at random, related to your mental itch—somebody says something you disagree with on Twitter and you get into an argument about it. You get into long discussions on reddit. You start posting and commenting angrily on Facebook about various items in the news. Your actions might become more about convincing yourself and convincing others that you really do care, though subconsciously part of you recognizes you’re a bit of a fraud. Meanwhile, the world isn’t getting any better on account of your actions, and maybe you’re turning into one of those negative, resentful, unpleasant people who always seems to be complaining about things.

The human capacity for self-delusion is almost limitless. Don’t go down this road. Be honest with yourself. What do you think could be better about the world, and what are you going to do about it? Act accordingly.

22 Jun 21:15

Privacy will become obsolete

by Volker Weber
We have become addicted to tools that will make us targets.

There’s a process to grieving the loss of your privacy. First you deny it. Then you get angry about it. Then you try to make a deal about it. Then you get sad. And then you just accept it, because this is your future.

You cannot hide. You need these tools. You’re about to trade the annoyance of paying for the tools you need for the annoyance of constant harassment from people who will know all about you.

More >

22 Jun 21:14

Participatory Surveillance – Who’s Been Tracking You Today?

by Tony Hirst

With the internet of things still trying to find its way, I wonder why more folk aren’t talking about participatory surveillance?

For years, websites have been gifting to third parties the information that you have visited them (Personal Declarations on Your Behalf – Why Visiting One Website Might Tell Another You Were There), but as more people are instrumenting themselves, the opportunities for mesh network based surveillance are ever more apparent.

Take something like thetrackr, for example. The device itself is a small bluetooth powered device the size of a coin that you attach to your key fob or keep in your wallet:

The TrackR is a Bluetooth device that connects to an app running on your phone. The phone app can monitor the distance between the phone and device by analyzing the power level of the received signal. This link can be used to ring the TrackR device or have the TrackR device ring the phone.

The other essential part is an app you run permanently on your phone that listens out for the trackr device. Not just yours, but anyone’s. And when it detects one, it posts its location to a central server:

[thetrackr] Crowd GPS is an alternative to traditional GPS and revolutionizes the possibilities of what can be tracked. Unlike traditional GPS, Crowd GPS uses the power of the existing cell phones all around us to help locate lost items. The technology works by having the TrackR device broadcast a unique ID over Bluetooth Low Energy when lost. Other users’ phones can detect this wireless signal in the background (without the user being aware). When the signal is detected, the phone records the current GPS location, sends a message to the TrackR server, and the TrackR server will then update the item’s last known location in its database. It’s a way that TrackR is enabling you to automatically keep track of the location of all your items effortlessly.

And if you don’t trust the trackr folk, other alternatives are available. Such as tile:

The Tile app allows you to anonymously enlist the help of our entire community in your search. It works both ways — if you’re running the app in the background and come within range of someone’s lost item, we’ll let the owner know where it is.

This sort of participatory surveillance can be used to track stolen items too, such as cars. The TRACKER mesh network (which I’ve posted about before: Geographical Rights Management, Mesh based Surveillance, Trickle-Down and Over-Reach) uses tracking devices and receivers fitted to vehicles to locate other similarly fitted vehicles as they pass by them:

TRACKER Locate or TRACKER Plant fitted vehicles listen out for the reply codes being sent out by stolen SVR fitted vehicles. When the TRACKER Locate or TRACKER Plant unit passes a stolen vehicle, it picks up its reply code and sends the position to the TRACKER Control Room.

That’s not the only way fitted vehicles can be used to track each other. A more general way is to fit your car with a dashboard camera, then use ANPR (automatic number plate recognition) to identify and track other vehicles on the road. And yes, there is an app for logging anti-social or dangerous driving acts the camera sees, as described in a recent IEEE Spectrum article on The AI dashcam app that wants to rate every driver in the world. It’s called the Nexar app, and as their website proudly describes:

Nexar enables you to use your mobile telephone to record the actions of other drivers, including the license plates, types and models of the cars being recorded, as well as signs and other surrounding road objects. When you open our App and begin driving, video footage will be recorded. …

If you experience a notable traffic incident recorded through your use of the App (such as someone cutting you off or causing an accident), you can alert Nexar that we should review the video capturing the event. We may also utilize auto-detection, including through the use of “machine vision” and “sensor fusion” to identify traffic law violations (such as a car in the middle of an intersection despite a red stop light). Such auto-detected events will appear in your history. Finally, time-lapse images will automatically be uploaded.

Upon learning of a traffic incident (from you directly or through auto-detection of events), we will analyze the video to identify any well-established traffic law violations, such as vehicle accidents. Our analysis will also take into account road conditions, topography and other local factors. If such a violation occurred, it will be used to assign a rating to the license plate number of the responsible driver. You and others using our App who have subsequent contact with that vehicle will be alerted of the rating (but not the nature of the underlying incidents that contributed to the other driver’s rating).

And of course, this is a social thing we can all participate in:

Nexar connects you to a network of dashcams, through which you will start getting real-time warnings to dangers on the road

It’s not creepy though, because they don’t try to relate number plates to actual people:

Please note that although Nexar will receive, through video from App users, license plate numbers of the observed vehicles, we will not know the recorded drivers’ names or attempt to link license plate numbers to individuals by accessing state motor vehicle records or other means. Nor will we utilize facial recognition software or other technology to identify drivers whose conduct has been recorded.

So that’s all right then…

But be warned:

Auto-detection also includes monitoring of your own driving behavior.

so you’ll be holding yourself to account too…

Folk used to be able to go to large public places and spaces to be anonymous. Now it seems that the more populated the place, the more likely you are to be located, timestamped and identified.


22 Jun 21:10

Nearly impossible to predict mass shootings with current data

by Nathan Yau

Even if there were a statistical model that predicted a mass shooter with 99 percent accuracy, that still leaves a lot of false positives. And when you’re dealing with individuals on a scale of millions, that’s a big deal. Brian Resnick and Javier Zarracina for Vox break down the simple math with a cartoon.

Tags: probability, shootings

22 Jun 21:10

Intersectionality as a Technology

by David Banks

The recent tragedy in Orlando should remind us, among many other things, that building solidarity and compassion across multiple identities is both difficult and necessary. It is difficult because too few people are willing or able to understand how intersecting forms of oppression can leave their mark on one’s identity. It is necessary because those forms of oppression are at their most powerful when they divide people as they hold them down. Building a politics that recognizes the unique challenges of intersecting identities while not stopping at advocating for the freedom of only that identity is the sort of critique that reminds us that organizations like The Human Rights Campaign are a force for progressive change but ultimately an extremely limited one. I like to think of concepts like identity politics and intersectionality as inventions or technologies because it underscores how analytical concepts do work in the world. You can look at writing by radical collectives before and after these concepts were invented and see very different kinds of points being made and new approaches to activist work being tested. Thinking this way also helps us think about how and to what degree people use these concepts correctly or productively.

The Combahee River Collective started out as a chapter of the National Black Feminist Organization but eventually became a black feminist lesbian organization of its own operating out of Boston in the second half of the 1970s. The term “identity politics” was first coined in their collective statement released in 1977 which was consciously part of building a movement around intersecting forms of oppression. In the interview below, between author and black feminist Kimberly Springer and Combahee River Collective member Barbara Smith, we can see how identity politics and intersectionality were “invented” for a very particular purpose but then appropriated by the right wing to do the exact opposite kind of rhetorical work. After so much abuse, these terms get “watered down” even when they’re used by well-intentioned leftists.

Before turning to the interview I want to suggest that while Smith and Springer don’t dwell too long on the right-wing’s intentions for using terms like identity politics, I suspect the hijacking of these terms was an intentional act of sabotage (or technological appropriation [PDF]) and not a misunderstanding. The intentionality becomes more obvious when conservatives seem to “get” intersectionality better than liberals. For example, Melissa Gira Grant in Pacific Standard writing about the recent spate of anti-trans bathroom bills notes: “Same-sex marriage, straight sex outside marriage, and trans people — they see the sexual politics linking all these issues, a kind of conservative intersectionality liberals still struggle over.”

The following is excerpted from pages 53 and 54 in Ain’t Gonna Let Nobody Turn Me Around: Forty Years of Movement Building with Barbara Smith (2014).

Barbara Smith: We meant to assert that it is legitimate to look at the elements of a combined identity that included affiliation or connection to several marginalized groups in this society. There is meaning in being not solely a person of color, not solely Black, not solely female, not solely lesbian, not solely working class or poor. There is a new constellation of meaning when those identities are combined. That’s what we were trying to say. … Black politics at the time, as defined by males, did not completely or sufficiently address the actual circumstances of real, live Black women. They just didn’t.

What the right wing meant by identity politics was that those people who are not white, not male, not straight, and not rich, it was not legitimate for them to assert anything, because they just wanted special privileges and special rights in a context of: “Enough rights already.”  … White males who are heterosexual and have class privileges—the system does work pretty well for them. There was a great resentment that these other people, these people they considered to be marginal and undeserving of the same kind of privileges and access, they were irritated that those people were asserting that, again, it made a difference whether you were an immigrant of Muslim heritage or religious beliefs, living in the United States, and maybe even queer at the same time. They didn’t want to hear about that. …

Kimberly Springer: But it seems like there are some on the left who might let the Right co-opt the term “identity politics” by talking about the differences that they think identity politics creates. It seems like the disparagement of identity politics is something that works against having unity within a movement. So, if people are organizing that there’s value in their own situation and their own identity, how does that work with the goal of solidarity?

Barbara Smith: That was another aspect of it. Because the watered-down version of identity politics was just what you described. Which was, “I’m an African American, working-class lesbian with a physical disability and those are the only things I’m concerned about. I’m not really interested in finding out about the struggles of Chicano farm workers to organize labor unions, because that doesn’t have anything to do with me.” The narrow watered-down dilution of the most expansive meaning of the term “identity politics” was used by people as a way of isolating themselves, and not working in coalition, and not being concerned about overarching systems of institutionalized oppression. That was narrow.

22 Jun 21:10

How Zendesk Onboards New Users

22 Jun 21:09

SQLite and Android N

TLDR

The upcoming release of Android N is going to cause problems for many apps that use SQLite. In some cases, these problems include an increased risk of data corruption.

History

SQLite is an awesome and massively popular database library. It is used every day by billions of people. If you are keeping a list of the Top Ten Coolest Software Projects Ever, SQLite should be on the list.

Many mobile apps use SQLite in one fashion or another. Maybe the developers of the app used the SQLite library directly. Or maybe they used another component or library that builds on SQLite.

SQLite is a library, so the traditional way to use it is to just link it into your application. For example, on a platform like Windows Phone 8.1, the app developer simply bundles the SQLite library as part of their app.

But iOS and Android have a SQLite library built into the platform. This is convenient, because developers do not need to bundle a SQLite library with their software.

However

The SQLite library that comes with Android is actually not intended to be used except through the android.database.sqlite Java classes. If you are accessing this library directly, you are actually breaking the rules.

And the problem is

Beginning with Android N, these rules are going to be enforced.

If your app is using the system SQLite library without using the Java wrapper, it will not be compatible with Android N.

Does your app have this problem?

If your app is breaking the rules, you *probably* know it. But you might not.

I suppose most Android developers use Java. Any app which is only using android.database.sqlite should be fine.
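For reference, the rule-following path looks roughly like the sketch below: every database call goes through android.database.sqlite, so the app itself never loads or links the system libsqlite.so. This is only an illustrative sketch; the helper class, database name, and schema are made up.

import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

// Hypothetical example: all SQLite access goes through the platform's
// Java wrapper classes, never through the native library directly.
public class NotesDbHelper extends SQLiteOpenHelper {
    public NotesDbHelper(Context context) {
        super(context, "notes.db", null, 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE notes (_id INTEGER PRIMARY KEY, body TEXT)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS notes");
        onCreate(db);
    }
}

// Usage: the wrapper owns the native library, so nothing else in the
// process needs to touch the system SQLite.
// SQLiteDatabase db = new NotesDbHelper(context).getWritableDatabase();
// db.execSQL("INSERT INTO notes (body) VALUES (?)", new Object[] { "hello" });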

But if you are using Xamarin, it is rather more likely that your app is breaking the rules. Many folks in the Xamarin community tend to assume that "SQLite is part of the platform, so you can just call it".

Xamarin.Android 6.1 includes a fix for this problem for Mono.Data.Sqlite (see their release notes).

However, that is not the only way of accessing SQLite in the .NET/Xamarin world. In fact, I daresay it is one of the less common ways.

Perhaps the most popular SQLite wrapper is sqlite-net (GitHub). If you are using this library on Android and not taking the extra steps to bundle a SQLite library, your app will break on Android N.

Are you using Akavache? Or Couchbase Lite? Both of these libraries use SQLite under the hood (by way of SQLitePCL.raw, which I maintain), so your app will need to be updated to work on Android N.

There are probably dozens of other examples. GitHub says the sqlite-net library has 857 forks. Are you using one of those? Do you use the MvvmCross SQLite plugin? Do any of the components or libraries in your app make use of SQLite without you being aware of it?

And the Xamarin community is obviously not the whole story. There are dozens of other ways to build mobile apps. I can think of PhoneGap/Cordova, Alpha Anywhere, Telerik NativeScript, and Corona, just off the top of my head. How many of these environments (or their surrounding ecosystems) provide (perhaps accidentally) a rule-breaking way to access the Android system SQLite? I don't know.

What I *do* know is that even Java developers might have a problem.

It's even worse than that

Above, I said: "Any app which is only using android.database.sqlite should be fine." The key word here is "only". If you are using the Java classes but also have other code (perhaps some other library) that accesses the system SQLite, then you have the problems described above. But you also have another problem.

To fix this, you are going to have to modify that "other code" to stop accessing the system SQLite library directly. One way to do this is to change the other code to call through android.database.sqlite. But that might be a lot of work. Or that other code might be a 3rd party library that you do not maintain. So you are probably interested in an easier solution.

Why not just bundle another instance of the SQLite library into your app? This is what people who use sqlite-net on Xamarin will need to do, so it should make sense in this case too, right? Unfortunately, no.

What will happen here is that your android.database.sqlite code will continue using the system SQLite library, and your "other code" will use the second instance of the SQLite library that you bundled with your app. So your app will have two instances of the SQLite library. And this is Very Bad.

The Multiple SQLite Problem

Basically, having multiple copies of SQLite linked into the same application can cause data corruption. For more info, see this page on sqlite.org. And also the related blog entry I wrote back in 2014.

You really, really do not want to have two instances of the SQLite library in your app.

Zumero

One example of a library which is going to have this problem is our own Zumero Client SDK. The early versions of our sync library bundled a copy of the SQLite library, to follow the rules. But later, to avoid possible data corruption from The Multiple SQLite Problem, we changed it to call the system SQLite directly. So, although I might like to claim we did it for a decent reason, our library breaks the rules, and we did it knowingly. All Android apps using Zumero will need to be updated for Android N. A new release of the Zumero Client SDK, containing a solution to this problem, is under development and will be released soon-ish.

Informed consent?

I really cannot recommend that you have two instances of the SQLite library in your app. The possibility of corruption is quite real. One of our developers created an example project to demonstrate this.

But for the sake of completeness, I will mention that it might be possible to prevent the corruption by ensuring that only one instance of the SQLite library is accessing a SQLite file at any given time. In other words, you could build your own layer of locking on top of any code that uses SQLite.
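As a rough idea of what that would mean in practice, here is a minimal sketch of such a locking layer, assuming you could actually route every piece of database code in the process through it (which is exactly the hard part). The class name and usage are hypothetical.

import java.util.concurrent.Callable;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical process-wide gate: only one piece of code may touch the
// SQLite file at a time, regardless of which SQLite instance it uses.
// This only helps if *every* caller, in every library, goes through it.
public final class SqliteGate {
    private static final ReentrantLock LOCK = new ReentrantLock();

    private SqliteGate() {}

    public static <T> T withLock(Callable<T> work) throws Exception {
        LOCK.lock();
        try {
            return work.call();
        } finally {
            LOCK.unlock();
        }
    }
}

// Usage (hypothetical):
// long count = SqliteGate.withLock(() -> countRowsSomehow());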

Only you can decide if this risk is worth it. I cannot feel good about sending anyone down that path.

Stop using android.database.sqlite?

It also makes this blog entry somewhat more complete for me to mention that changing your "other code" to go through android.database.sqlite is not your only option. You might prefer to leave your "other code" unchanged and rewrite the stuff that uses android.database.sqlite, ending up with both sets of code using one single instance of SQLite that is bundled with your app.

A Lament

Life was better when there were two kinds of platforms, those that include SQLite, and those that do not. Instead, we now have this third category of platforms that "previously included SQLite, but now they don't, but they kinda still do, but not really".

An open letter to somebody at Google

It is so tempting to blame you for this, but that would be unfair. I fully admit that those of us who broke the rules have no moral high ground at all.

But it is also true that because of the multiple SQLite problem, and the sheer quantity of apps that use the Android system SQLite directly, enforcing the rules now is the best way to maximize the number of Android apps that break or experience data corruption.

Would it really be so bad to include libsqlite in the NDK?

 

22 Jun 21:09

When They See Behind the Curtain

by Eric Karjaluoto

Sam liked the paintings, and I liked Sam. Of all my art school instructors, he was the most fun. I know few others who can talk for three hours straight, and still hold their audience captivated.

This was my grad project, and I’d worked hard on the paintings. (You can see a few of them here.) There were 50 in total. Some were bad, most were acceptable, and I believe a few were decent. In 4th year at Emily Carr, students have some wall space to display their work. I planned to show these paintings in that space.

I was broke, so my dad helped me strip some 2x4s for the frames. I then miter-cut the corners, joined them, and sanded the edges. The inner area of each frame was raised to create a platform for mounting the painting. Around that was a deeper channel of sorts, surrounded by another raised edge, to finish the frame.

There were a lot of frames, so completing them took days of work. Once ready, I painted each frame black, which helped the images feel more connected. For an inexpensive job, the frames looked alright.

Sam’s expression changed when he flipped one of my paintings over. “Oh,” he said, in disappointment—seeing that the backs of the frames were still bare wood. “You know, it would have only taken you another few minutes to paint the backs of the frames, too.” He flipped the painting right side up, and returned to talking about the paintings themselves. Bummed out, I felt like I had lost him.

Sam was right. To finish the frames faster I just left the backs untreated. In my mind, this wasn’t a problem. Most wouldn’t look at the backs of these paintings, so why did the unseen parts matter? Nevertheless, these details do matter.

The more you invest in making a product feel a certain way, the more important those small details are. No one notices that a room’s flooring doesn’t match, if the house is a disaster. However, in a space that’s well put together, even the wrong trim molding can ring discordant.

This is true in many settings. Flimsy garment hangers in a luxury clothing shop weaken the entire presentation. A button with a drop-shadow in an otherwise flat design feels misplaced. Fake drop-caps in a beautiful film trailer imply that the picture is the product of amateurs.

Fact is, you can get to 95% easily. People do this all the time. Pick up a recent issue of Communication Arts, see what the award winners are doing, and copy their approaches. Bazinga. You’re there. But, you’re not… and you know it. Although all the pieces are essentially the same, what you made doesn’t sing. This is because that last 5% is the tough part.

Anyone can copy some layout approaches or photographic tricks. Most will find a way to imitate slick animations or whiz-bangery. What’s more elusive, though, is finesse. Is the image treated in a way that makes the type above it easy to read? Is the text shaped in a visually pleasing fashion? What’s the proper ratio between the heading, copy, and note text? Is there suitable tension, making this item feel compelling?

Unlike matching some popular trend, this fine-tuning is harder to fake. That doesn’t make it any less important. Think of it this way: even the most beautiful song will make listeners wince if played on an out-of-tune piano. To create harmony, you must hone your sensibilities. This comes with experience. The ability to achieve this touch is a byproduct of time, care, and attention.

No one can grant you this skill. You need to develop it for yourself. To better equip yourself for this, I urge you to tune even the least critical-seeming elements in your design. (This, of course, should only happen once the direction is set and agreed upon.) Figure out what common denominators your type system uses. Establish a voice for each project and stick to it. Take time to catalog and treat often-forgotten screens (e.g., Error 404 pages, modals, warning messages).

While doing this, you need to catch yourself if you ever find yourself saying, “but it’s just a…”—because it never is. Every part of the presentation matters: The box you ship your product in. The packing material that keeps it safe. The documentation that accompanies the shipment.

You needn’t get carried away with any of this. Instead, just remain mindful of each element. In fact, each of these touch-points presents an opportunity. You can welcome someone when they open your box. You can hide a secret message that makes them smile, when they discover it. You can show them that you cared about every part of the project. (Whether you do or don’t says a lot about your brand.)

Your audience might not be able to verbalize what you did. They might not be able to spot every detail you put thought into. However, they can feel it. That’s what you as a designer—or company—need to remind yourself of. Your buyers don’t need to know how hard you worked to make your product perfect. That’s not their concern. They just need to believe that what you promised (verbally or implicitly) matches what you deliver.

22 Jun 21:08

App Launching on Apple Watch

by Alex Guyot

Conrad Stoll:

The significance of complications being the best way to launch apps is why swiping between watch faces is so valuable. It allows users to literally switch their context on the Apple Watch. One day this could presumably happen automatically, but at least it only takes one swipe to switch from your primary daily watch face to one with the type of information you want to have at a glance in another context.

[...]

I love using the Stopwatch and Timer apps while I'm cooking or brewing coffee, but I don't want their complications visible during the rest of the day. The ability to swipe left and bring up an entire watch face devoted to them and any other complications relevant to cooking is a game changer for me.

Prescient realization by Stoll about the implications of the new Apple Watch swipe-to-change-face feature. While Apple emphasized the Dock during the keynote as the new best way to switch between apps, maybe that crown will really go to Complications on various watch faces.

→ Source: conradstoll.com

22 Jun 21:08

Towards Vision Zero

by jnyyz

There was a press event this morning to call for the City of Toronto to take serious action moving towards Vision Zero, i.e. to eliminate pedestrian and cyclist deaths by car. Cycle Toronto organized the event and brought in representatives from Walk Toronto, Kids at Play Safe Streets, and Bike Law, as well as some of the road victims’ friends and family. Their report is here.

A little context is in order for the non-Torontonians. This past Monday, the mayor announced a plan to reduce the number of people killed in or by cars, which reached an all-time high of 64 last year. His stated aim of reducing these deaths by 20% was widely criticized, to the point that after just a day, he came out to say that he now aims for the eventual elimination of road deaths. However, the proposed budget for this plan has not budged from the original figure of $68M over five years (and it is questionable how much of that money was already on the table). This road safety plan is to be considered by PWIC next week.

Things getting organized. Shortly before this picture, I missed getting a shot of the acrylic podium being pulled up by cargo bike.


Jared and Maureen Coyle (Walk Toronto), getting set to start.


Kasia Briegmann-Samson talking about the death of her husband Tom Samson. Heartbreaking quote: “I hope you never have to stand in my shoes. I hope you never have to have your husband’s wedding ring handed to you in a paper bag. I hope you never have to tell your children their father was killed.”


Yu Li, a friend of Peter Kang, who was killed last June.


Meghan Sherwin, Executive Director of Safe Streets Kids at Play.


Patrick Brown of Bike Law. To his immediate right are the wife and brother of Edouard Leblanc, who was killed on the Gatineau hydro corridor. His killer just pled guilty to careless driving, and was fined $700.


Jared Kolb.


Maureen Coyle talks about Vision Zero.


Councillor Jaye Robinson, chair of PWIC, who supports Vision Zero in principle but obstructs the installation of bike lanes.


The representatives of the victims’ families being photographed for Metro News, which is running a week-long series on pedestrian and cyclist deaths.

You can see their picture here.

The four victims represented were the same as those represented at a rally at Queen’s Park last year about Vulnerable Road Users’ legislation.

VRU laws are like an assault weapons ban in the US. An assault weapons ban is not going to eliminate deaths by gun, just as VRU laws are not going to keep all pedestrians and cyclists safe, but they sure would be a step in the right direction.

Enough talk. It is time for action.

Globe and Mail: “Family, friends of people killed by Toronto drivers blast city’s safety plan”

CBC: “Toronto plan to improve road safety ‘very timid,’ advocacy group says”

Metro News:

Globe and Mail: “5 cities Toronto could copy to improve road safety”

Toronto Star:

 


22 Jun 21:06

A Much Better Way To Use Slack (and any collaboration tool)

by Richard Millington

The wrong way to use Slack is as a substitute for email.

This is when you create separate channels based around departments (usually with one general group for everybody) and tell everyone to participate there instead.

The benefit is now everyone can see, search for, and participate in discussions.

The downside is this becomes an overwhelming amount of information, it drastically increases the time spent communicating (this is not a good thing), and everyone is invited to share opinions regardless of their experience or expertise. Your decisions change to accommodate the opinions of people who shouldn’t be influencing the outcome.

A better way to use Slack is as part of an advanced collaboration process that cuts communication, not as a substitute for email that increases it.

Here’s how:

1) Create purpose-driven channels. Don’t set up static channels, use purpose-driven channels. Don’t have a channel for the sales team, have a channel for a specific lead, client, or goal, e.g. improving the sales pipeline. Don’t have a channel for marketing, have a channel for revamping the website.

2) Only invite people when they’re needed. Only invite people to join at the exact moment you need their approval, advice, opinion, or to perform a specific goal. Once they have performed that goal, they should leave the channel.

This is the hardest part. Most people like to see every project through and chime in on every subsequent action. Resist this urge. It leads to people in dozens of channels participating far beyond their qualifications. This means that you (yes you!) will have to leave channels frequently too.

3) Ensure all the information is ready for them. Before you invite someone, make sure they have all the information they need to perform the action you want. And make sure they only have that information. Remove the irrelevant files at this stage. You can drop them back in later. Make sure they work in shared Google/Dropbox documents linked to the channel too. Remember the principle: the right information for the right person at the exact moment they need it. Make sure you set a deadline for the action too.

4) Set the channel purpose to the next step needed. Set the channel topic to the next step required. It should be possible for your boss to jump into any channel and see what the next step is and what’s holding things up. Using Google Docs you should be able to see the very next word that has to be written. This will save you when people get sick, take vacations, or leave the company.

5) Integrate relevant tools. Using Zapier you can do fun things. You can set up notifications from Quickbooks when invoices go out or get paid. Or have emails from specific people or a client (e.g. “from:@feverbee.com”) go directly into the relevant channel when you’re on vacation (far more effective than an away message). Or include notification updates from Google Docs or forms. Or have new Salesforce leads/prospects sent directly to a salesperson. Or add actions to the calendars of colleagues.

6) Archive documents and close the channel. Once the purpose has been achieved, ensure the documents and discussions are archived in a shared folder and close the channel down.

When this system works well, it lets you coordinate tasks, track progress, reduce time communicating, and ensure your team are working to their strengths.

Remember the goal isn’t to increase communication, it’s to reduce it.

22 Jun 21:04

Firefox 48 Beta 3 Testday, June 24th

by Paul Silaghi

Hello Mozillians,

We are happy to announce that next Friday, June 24th, we are organizing Firefox 48 Beta 3 Testday. We’ll be focusing our testing on the New Awesomebar feature, bug verifications and bug triage. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! See you on Friday!

22 Jun 21:04

Japan’s Classic (Misleading) Pro-Whaling Book: “You can’t tell us what to eat!”

by Louis Krauss

Japan’s fishing traditions have long been one of its most important aspects, the country having survived hundreds of years mostly on fish as well as obscure seafood such as sea urchins, squid, and eels instead of red meat. But now, with the world’s human population reaching new heights, many organizations have requested that Japan stop hunting certain endangered species, particularly whales. Not all Japanese are ready to give up on whaling, particularly those from Japan’s Institute for Cetacean Research (ICR), which is funded by the government and continues to support whale hunting on a smaller scale, allegedly for the sake of “research” and maintaining the tradition.

Masayuki Komatsu, one of the Institute’s members and Japan’s deputy commissioner to the International Whaling Commission (IWC), published a book back in 2001 explaining why he believes Japan’s whaling industry is necessary in our modernizing world. In the broadly titled The Truth Behind the Whaling Dispute, Komatsu makes many claims that Japan needs to curtail certain whale populations to prevent other fish species from going extinct, and that these animals are just “part of the food chain”.


Komatsu’s “The History and Science of Whales” shows you the typical Japanese whale cuisines.

Komatsu’s basic point is that after a certain number of years of protecting the whales, we reach a point where whales start competing with human fisheries and drive the major human-consumed fish species extinct. (Editor’s note: Like Bluefin Tuna perhaps? Damn those whales. They’re ruining our sushi menus.)

The only problem with that theory is most whale species don’t eat fish targeted by humans. Minke and baleen whales eat krill, while sperm whales mainly eat deep-sea squid that are unreachable and undesired by fishermen.

What I think does ring true in Komatsu’s argument is his depiction of anti-whalers as having an unflinching belief that whales should be protected because of how big and majestic they are. Unlike whales, one can travel the countryside to see pigs, cows and chickens packed by the thousands into cramped stockyards, yet there is much more widespread support for “save the whales” than “save all farm animals”. His underlying point is that because we have overprotected some whales since they came closest to extinction in the 1960s, we as humans need to play the role of god and control the populations of certain whale species that may overtake others. According to Komatsu, it’s not as easy as letting nature restore itself:

“Misunderstanding leads the ignorant public to believe that the ‘leaving whales alone’ doctrine is the correct approach to restore proper balance in the ecosystem,” Komatsu says. “It is completely wrong to believe that ocean resources can recover to the virgin status, if left alone.”

Komatsu’s example of this is Antarctica’s blue whale population, which dropped from 200,000 to 500 whales in the 1960s, and since then has risen back up to 1,200. He says the reason the population has not yet reached its original level is that minke whales have been taking all the krill in the Antarctic for themselves. Like the hundreds of movie-depicted time travelers who have to fix the past to restore the present, Komatsu says we need to “cull a considerable number of minke whales.” (Editor’s note: H.G. Wells wrote a book about this, right?)

“This is a law of nature with which mankind has interfered,” Komatsu says. “Since mankind has broken the law and skewed the balance of nature, it is a duty imposed upon us to act responsibly and bring it back to the proper balance.”

You need to be really invested in this topic to get a lot out of Komatsu’s longwinded, high school-like thesis of a book on why whales need to continue to be hunted. Later sections I haven’t discussed include: 1) why whale meat is an important source of protein that we aren’t utilizing to the fullest, and 2) why whale meat is more environmentally friendly than beef, which requires deforestation for farmland. Komatsu claims naïve environmentalists are blind to the possibilities of whale meat:

“Which is better for conservation of nature, expansion of grazing land by deforestation or sustainable utilization of a part of wildlife?” Komatsu says. “It would be a folly to discard utilization of renewable resources just for the sake of appeasing so-called environmentalists whose egotistic assertion have been disseminated through misleading TV commercials.”

I suppose we never know at what point in the near future we’ll have to stop eating beef to preserve the planet. I heard from one Japanese friend that good whale meat can be quite tasty. Komatsu argues that soon, when humans total more than 10 billion, it will be hard to keep up beef and chicken farming, since that accelerates global warming by cutting down trees for land. The excrement from farm animals pollutes the water and kills off natural water plants and freshwater fish.

“Under these circumstances, can we afford to abandon the idea of utilization of whale resources? The answer is: ‘Absolutely not!'”

On the topic of the Japanese using whales as an important source of protein, Komatsu lashes out at the IWC (which he later brands as “Goblins” for shifting the original focus from “stabilizing whale oil prices” to a focus on protecting whales) by saying they totally disrespect the cultural traditions of places in Africa and Asia. He says urban civilians are fooled by environmentalists handing out pamphlets showing gorillas being eaten, and he essentially counters with the thought, ‘well, what if they’re not eating an unreasonable number of gorillas!’ In his actual words:

“The arrogance of their assertion, ‘stop eating wild animals’ totally disregards the social structure of the people surviving on the meat of hunted animals.”

Komatsu might be right in thinking environmentalists are pushing us towards adopting developed countries’ more boring diets, food being one of the things that helps tie Japan to third-world countries that offer similarly diverse foods. You can be sure in the 1960s there weren’t as many crustless, paper-white bread sandwiches at konbinis as there are nowadays, leading Komatsu to this fear of Japan being stripped of its whaling cuisine (despite the fact that not many Japanese eat whale anymore). (Editor’s note: The Japanese government has over two tons of frozen whale meat, much of it from ‘research’ whaling, that it’s saving for some unspecified emergency. Whaling may not be sustainable without Japanese government subsidies, which raises the question: should taxpayer money be used to sustain it, and do most Japanese people want public funds used for that purpose?) As an American, I can definitely agree the internet has created a stigma around certain foods such as natto, which have been framed too negatively, and in Komatsu’s eyes I suppose whales are no different.

“No one has a right to criticize the food culture of other people. When we Japanese eat our food such as NATTO (fermented soy beans), ANKO (sweetened black beans) and SASHIMI (fresh slices of raw fish), we accept that some people may think that such food is weird (it is their business to think so), but we would be angry if they forced us to stop eating it.”

Movies such as The Cove are able to show pictures, generally agreed to be ‘disturbing’, of whales (and dolphins) being sliced apart, but Komatsu raises the question: why is it more disturbing than pig and cow slaughterhouses?

As my friend and Johns Hopkins Asian Studies Professor Yulia Frumer points out, Japan relies very heavily on foreign exports and likely harbors ill feelings about having to listen to the U.S. for what foods it can and cannot receive. I admit that Komatsu gives dozens of faulty numbers in his writing about whales (he actually hindered his argument in “The History and Science of Whales” by mistakenly saying the estimated number of sperm whales was 200,000 when it was actually 2 million), but I think it’s important we consider how some of Japan’s older generations feel belittled by the U.S. and Australian whaling/fishing commissions that want to tell them what they’re allowed to catch.

“Analysis of their argument leads us to believe that they are viewing the rights of the non- human creatures as equal to the rights of men. Here, I think it is worthwhile to ponder upon what truly should be the protection of animals.”

Komatsu manages to make some logical arguments, but there is a twisted logic to them, and in the end, whaling in Japan is not necessarily something that the majority of the population wants, and the industry is unsustainable.

So why does it keep getting funded? That’s a subject worthy of a book in itself.

 

Jake Adelstein contributed to this review. For more on the whaling issue please see this article originally published in The Daily Beast. 

I’ll Have the Whale, Please: Japan’s Unsustainable Whale Hunts

22 Jun 21:04

The Internet Doesn't Run on a Bus—Your APIs Shouldn't, Either

by anant

There is no bus in the internet. It is the caller’s responsibility to make the right call. Yes, there's a broken link or two on the internet, but then one of the two parties fixes it and the problem doesn't last too long.

The world of internal APIs looks like the figure on the right, not the one on the left. Don’t move into the future using a bus architecture. DON’T.

  • The scale of thousands of microservices and billions of internal calls rules out the monolithic architecture on the left.
  • Connectors are legacy; every service will expose clean, RESTful APIs.
  • Orchestration is pushed to code and logic within the services; what remains is much less onerous.
  • Stable interactions are less common; things change in months, not years.
  • Centralization of interaction is an antipattern.

Keep your current integration architecture. Definitely. But build your future around APIs, and not integration architectures masquerading as APIs.

For more on this topic, check out the eBook, "Beyond ESB Architecture with APIs: How APIs Displace ESBs and SOA in the Enterprise."


Thoughts? Comments? Questions? Join the conversation (or start one) in the Apigee Community.

 
22 Jun 21:04

Iconic London Transport Typeface Retooled for its 100th...

by illustratedvancouver










Iconic London Transport Typeface Retooled for its 100th Birthday

Johnston has undergone a century of iterations—now Monotype is restoring the quirks of the original typeface (and breaking some rules)…

http://eyeondesign.aiga.org/iconic-london-transport-typeface-retooled-for-its-100th-birthday/

22 Jun 21:04

Refrigeration: Necessary for Better Butter?


Cold, hard butter from the fridge is really hard to spread on bread. You wind up tearing the bread and depositing the butter unevenly. Life’s rough, right?

Leave the butter out!

Anyway, the solution is ridiculously easy: Leave your butter out of the fridge. Leave it on the counter, where it’s easy to grab — and soft — whenever you have to spread it on something.

You’re probably thinking: “No way! It’s a dairy product! It’ll spoil!”

Actually, it won’t. Bacteria need moisture to flourish — and butter is made almost completely of fat. It began life as cream, and that was pasteurized, killing all the bacteria. Finally, the salt in most butter makes it even more unlikely bacteria will find a foothold.

Bottom line: You won’t get sick from butter you’ve left out for a few weeks.

You may, however, discover that it eventually tastes funny if it’s exposed to light and oxygen. Therefore, keep it in a covered, opaque butter dish.

This simple change will bring tremendous happiness to your life.

On the other hand, you’ll find yourself using more butter this way.

In other words — even more happiness.

22 Jun 21:04

Backchannel — which I contributed a post to last year — has been acquired by Conde Nast, and Steven…

by Stowe Boyd

I wonder what this means relative to Medium’s plans? Looking more and more like a publishing platform, and less like a publisher, I’d say.

Continue reading on Medium »

22 Jun 21:04

New Role as Columnist with Spacing

Spacing, Canada’s top magazine for urban issues, was one of the first media outlets to publish my articles when I started blogging and launched This City Life several years ago.  

I was recently asked to contribute to Spacing on a regular basis and am very excited to announce that I will be writing a bi-monthly column for their website. 

My first post appeared this week on the subject of Millennials and Housing Affordability. 

I will continue to share my latest articles on This City Life. Contact me if you have any story ideas or urban issues and exciting projects you want to see featured in the column.

22 Jun 21:03

Thoughts on Swift Playgrounds

by Fraser Speirs

At WWDC 2016 Apple announced Swift Playgrounds for iOS. Swift Playgrounds is an app that is connected to a larger initiative from Apple called "Everyone Can Code".

Everyone Can Code comprises the Swift Playgrounds app itself, a series of teacher and student guide books on the iBookstore and a suite of curriculum content delivered inside the Swift Playgrounds app.

I've spent the past few days at WWDC getting up to speed on Playgrounds and here is my first cut at understanding how it all fits together.

The App

Playgrounds is genuinely a full Swift interpreter built into an iPad app. Although the demo in the Keynote focused on some simple concepts, it's not a toy or limited version of the language.

The app consists of two parts: the source view on the left and the live view on the right. The source view is where you type your program and the live view is where you see any output. At the same time, in the right margin of the editor, you can see intermediate results as they are calculated. This has been part of Swift Playgrounds on macOS for some time.

The app supports two kinds of files. The first is the Playground file that you can create using Xcode on a Mac. These files can be transferred to the iPad and run as-is inside Playgrounds on iOS.

The second kind of file is called a Playground Book. This is what you saw in the keynote. It's much richer and supports a nested chapter and page structure that supports navigation as well as basic assessments of success inside each page. The package format is documented online.

There are also a range of things that authors can do with each page in a Playground Book to make it easier for beginning programmers to meet success. These include hiding setup code that doesn't need to be seen by the learner, defining "editable regions" to constrain the learner to only type in certain areas and providing hint text in those editable areas.

Another feature of the app is that it freely allows import and export of Playgrounds and Books to other users via AirDrop. In my teaching experience with beginner programming environments, being able to inspect and adapt someone else's code is a very good way to build learners' curiosity and confidence - as well as a bit of competitiveness!

The app also features certain other simplifying tools for entering code. There are colour pickers, image literals and other gestures that make it easier for learners to avoid syntax errors. One simple example is that you can drag out the lower brace of a conditional statement or loop to enclose other statements and everything springs into place when you let go.

The QuickType bar above the keyboard (the area where you get text suggestions) has also been adapted to give code completion suggestions that are sensitive to the context and only let you complete legal code in the language. Authors of Playground Books can also give hints to the suggestions mechanism to constrain it to only show certain symbols or only symbols from certain packages.

It's important - and not at all obvious on first sight - to understand that Playgrounds is not in itself an authoring environment for Playground Books. When a learner works with a Playground Book, their edits to the code in the book are stored as a diff against the original content in the book. The original content is never modified, but the diffs do get transferred with the book when it's sent to other users. The reason it's done this way is that it facilitates resetting a page in the book to its original state if the user needs to. You can also reset the entire book.

The other thing that is not entirely obvious is that Playgrounds has full access to the entire iOS API. This means that there is effectively no limit on the complexity of Playground that you can build. You can use APIs like Core Location, WebKit, MapKit, Core Motion, Networking and Core Bluetooth. One of the demos given in the session on Playgrounds was of Playground Swift code controlling a Sphero robot over Bluetooth. I already did the standard daft Browser-in-five-lines-of-code trick that we used to do on the Mac.

The Swift Language

A broader question than "is the app any good?" is whether or not the Swift language itself is any good for Computer Science education. I have in my career taught children to program in Visual Basic, Ruby, and most recently Python.

As a Computer Science teacher, I need to know that Swift is a good language for learning to program with. One simplistic approach to promoting Swift in CS is simply to make the argument that kids love smartphones and apps are written in Swift, therefore CS education should happen in Swift. I get the thinking behind that but it feels as zeitgeisty as the other moves that teachers make to co-opt anything that kids like and turn it into "education". Remember Second Life? And ... dare I say it ... Minecraft, in the fullness of time?

I prefer to ask which specific language features in Swift make the language a good choice for learners. I challenged Apple staff this week to make that case and I came away with some good points. One of the things I particularly liked was that Swift leans toward explicitness rather than implicit or inferred behaviours.

Swift also has API design guidelines focused on expressiveness and understandability rather than terseness. It's also a relatively new language. This certainly has its drawbacks in that the language has changed substantially over the last two major revisions but there is a consistency and clarity to its approach that is sometimes missing from languages like Python.

What's Not to Like?

Swift Playgrounds is a very new app on iOS. Although it's fairly complete, there are a few things I feel that it still needs.

Firstly, I mentioned that there are ways to get the Playgrounds app to render your code by omitting and hinting certain areas. There's currently no way to get that rendered view out of the app. This is important for teaching in two ways: firstly, it would allow students to submit work to a teacher through iTunes U. The second reason is that a teacher authoring a solution would be able to give a printout of a completed (rendered) version to a pupil who needed it for whatever reason - perhaps a pupil with learning difficulties for whom copying in a provided solution would represent a good achievement. Giving these pupils the full unrendered source would be overwhelming.

The bigger issue right now is that the authoring environment for Playground Books is Xcode on macOS. I'll invite you to use the fingers of maybe both hands to count the number of teachers in your area who (a) even have a Mac and (b) are au fait with Xcode.

I have seen many times that the phrase "you use a Mac for that" is a total and complete show-stopper in education. This was true for iTunes U before Course Manager came to iOS and it's still true for iBooks Author.

I think there might, in time, be ways to create Playgrounds and Playground Books on iOS but it will be neither easy nor convenient for some time to come. This isn't an unexpected problem but it is still a problem.

Overall, though, it's hard to find anything seriously bad to say about Swift Playgrounds except that it's an early, immature product right now. Despite that, it already has serious power under the hood and some impressive curriculum content. Can't wait to see where this goes.

22 Jun 14:53

Apple WWDC – Developer love.

by windsorr

Reply to this post


Developers are only increasing in their importance to Apple.

  • Apple’s WWDC conference showcased a lot of catch-up upgrades, a serious nod to the importance of China as well as some delicious eye candy for messaging.
  • More than ever before, the developer was front and centre of everything that Apple does, with more and more of the phone being opened up to third-party apps.
  • This makes complete sense because I have long believed that Apple’s differentiation lies mostly in its ability to distribute the apps and services of third parties in an easy and fun to use way.
  • Consequently, it is of paramount importance for Apple to keep developers happy and to offer them a constant stream of new features to keep their apps fresh and earning money.
  • Apple has really distanced itself from Google Play over the last 18 months but it is in no way resting on its laurels and is doing everything to keep the environment fresh for developers.
  • WatchOS / MacOS / TvOS received incremental upgrades which addressed many of the well-known shortcomings of these platforms, moving them to be more in line with competing offerings for the same device categories.
  • As one would expect, iOS got the most attention with iOS 10 launched which will be available as a free upgrade in the autumn.
  • iOS 10 upgrades were focused in 10 areas but the ones that appeared to matter most were:
    • Messages. Apple enabled a raft of features that give the user more options in terms of expressing himself with text messages.
    • This included large emojis, background animations, photo editing and so on.
    • This moves Apple to the forefront of messaging, but I am pretty sure that Messenger, Weixin and WhatsApp will quickly copy these ideas.
    • Lock Screen. Further enhancements have been made to the lock screen which improve usability but in my opinion undermine security and privacy.
    • iOS 10 now allows a whole raft of data to be accessed from the lock screen without necessarily unlocking the device which is great for usability, but also means that anyone can access that data if they pick up the device.
    • The more Apple increases what the user can do without unlocking the device, the less secure the user’s data becomes.
    • I suspect that this feature will appeal strongly to Chinese users where RFM’s research indicates that Chinese users care much less about data privacy.
    • Apple Music had a big user experience upgrade and the offering is now much more intuitive and easy to use.
    • Apple has also taken a leaf out of Spotify’s book and is offering more curated playlists for the user based on his tastes.
    • However, the user experience is the easy bit; accurately understanding users and cataloguing 40m media items is very difficult.
    • Time will tell how well Apple can do this but I think that it is still playing catch up in this area.
    • HomeKit is evolving exactly in the way that I have been expecting.
    • A new app called Home was launched that allows all of the devices in the home to be controlled from a single app.
    • This brings together all of the devices such that they can be part of a usage profile rather than individual elements.
    • For example, the user can put the house into night mode and with one click lock the doors, turn off the lights, close the blinds and so on.
    • I see HomeKit along with HealthKit as one of Apple’s key strategies to keep the iPhone differentiated long term as its edge as a developer platform will only last so long.
    • Of HealthKit there was no mention, but I still see this as early days.
  • The net result is that Apple has done enough to keep ahead of Google Play as the preferential developer platform and so there is no imminent risk of Apple losing its edge there.
  • China was also featured prominently for the first time, reinforcing Apple’s dependence on this market despite the fact that its services and app store do not fare very well in China.
  • None of this will solve Apple’s most pressing problem which is its lack of growth but with the valuation where it is today, I do not see this as a major problem.
  • Consequently, I still prefer Apple to Google but for share price appreciation in a 12 month window, I would look to Samsung, Microsoft or Baidu.
22 Jun 14:53

Twitter – Clouds of sound.

by windsorr

Reply to this post


 10% of SoundCloud fixes nothing.

  • Twitter is investing $70m in SoundCloud by taking the majority of its current $100m funding round that will value the company at $700m.
  • This will give Twitter a 10% stake in SoundCloud (and I presume a seat on the board) but how this will help to alleviate Twitter’s current predicament remains a mystery.
  • Twitter remains in a strategic bind.
  • Its service is extremely effective and very well monetised but it is such a small piece of Digital Life that its monetisation potential is fundamentally limited.
  • Furthermore, I think that its service is quite niche meaning that it appeals only to a subset of users which is why its user growth has also ground to a halt.
  • I think that the answer to Twitter’s problems can be found in the Digital Life pie where Twitter has coverage of just 17%.
  • This is what I think must be addressed.
  • Twitter needs to encourage users to spend time beyond microblogging engaged with a Twitter service.
  • Unfortunately, buying a 10% stake in SoundCloud will do none of these things.
  • Media Consumption represents 10% of the Digital Life Pie and if Twitter was to adequately cover this activity with a service of its own, its revenues could start growing once again.
  • However, a 10% stake buys Twitter very little in terms of a new ability to offer a Twitter service in that segment and Twitter users are very unlikely to start rushing to the SoundCloud service as a result.
  • Furthermore, Twitter will not own nor be able to monetise any of the data that SoundCloud generates meaning that, in reality, this investment changes nothing.
  • To really own a Digital Life segment will require a bold decision to be taken with regards to which Digital Life segment it should cover combined with heavy investments to bring it to fruition.
  • This necessitates a driven and focused CEO with a dedicated, talented and stable management team.
  • With constant turnover and a part-time CEO, I just can’t see how bold decisions are going to be taken meaning that Twitter will continue drifting.
  • Twitter’s market capitalisation remains at $11bn which represents a 2016E EV / Sales multiple of 3.8x.
  • This is a very high number to pay for a company that is not growing and this is why I continue to think the shares could test $10 per share sometime this year.
  • I remain very cautious on Twitter and would prefer almost any other ecosystem player to this.
22 Jun 03:28

Pythonista 3

by Rui Carmo

Now supporting Python 3.5 and 2.7 in this new app, which is 50% off during launch.

Still the most amazing iOS development tool out there, and steadily better with every release (the numpy and plotting stuff alone is well worth it).

22 Jun 03:22

How Ten Key Developments Are Shaping The Future Of Technology-Enabled Learning



TeachOnline.ca, Jun 19, 2016


Here's the list:

  1. Student Expectations and Requirements Are Changing
  2. Flexibility is Shaping New Ways of Delivering Programs and Courses
  3. Competency-based and Outcome-based Learning Are Growing Quickly
  4. Technology is Enabling New Approaches to Pedagogy
  5. MOOCs are Offering Expanded Routes to the Delivery and Recognition of Learning
  6. Assessment for Learning and Assessment of Learning Are Changing
  7. Governments are Re-thinking Quality and Accountability
  8. Equity Remains a Challenge, Despite Massification
  9. e-Portfolios Are Emerging as Critical Resources for Students
  10. The Role of the Faculty Member/Instructor is Changing 

The article includes an expanded discussion of each item. The main problem with this article is the same main problem for any listicle: there's no core theme or focus, no argument or overarching explanation. But the content is generally fine.

22 Jun 03:22

Vivaldi



vivaldi, Jun 19, 2016


Vivaldi is a web browser 'made for the power user'. "One of the things that makes Vivaldi unique is that it is built on modern web technologies. We use JavaScript and React to create the user interface with the help of Node.js and a long list of NPM modules. Vivaldi is the web built with the web." It looks like it's based on Chrome, and so supports important extensions (like AdBlock Plus). I'm trying it out for a little bit. Via a nice Doug Peterson post on browsers.

22 Jun 03:22

Susan Smith Nash



Susan Smith Nash, E-Learning Queen, Jun 19, 2016


Overview of Moodle cloud hosting. "The only downside that I can see," writes Susan Smith Nash, "is that it is in a beta mode, and it's possible that they may discontinue it. I hope not! But, that said, Moodle is very popular and I think that it's possible that it will be the first-choice solution of many users." See also: Moodle launches Moodle for schools.

22 Jun 03:20

How Council Likes Bike Lane Plans More than Bike Lanes

by dandy

To no one's surprise, council hasn't met its 10-year bike goals in the past. 

By: Jacob Lorinc

This story was originally published on the Torontoist.

Art by Ness Lee, originally published in “Bike lanes on Bloor on the back burner again?”

Last month, the City unveiled its latest long-term plan to expand cycling infrastructure in Toronto. In an effort to create a connected network for cyclists across the city, the proposed 10-year project identifies 525 kilometres of potential cycling infrastructure to the tune of $153.5 million over the next decade. The majority of the infrastructure would be located on major thoroughfares—with roughly 55 km dedicated to “sidewalk-level boulevard trails” alongside some of the busier streets—while the remainder would be spread out across quieter streets.

The Public Works & Infrastructure Committee agreed on a lower price tag, $16 million per year, and council agreed to the plan in principle, although some significant bike lane recommendations were removed in the process (review the recommended bike lanes here). But the proposal has yet to go through the budget process, so the plan can’t be implemented just yet.

Given Council’s history of not following through on plans it agrees to in principle, that’s an important caveat.

Read More: How Council Likes Bike Lane Plans More than Bike Lanes

Related on the dandyBLOG:
Six Bike Lanes that Could Connect the GTA
How Bike Lanes Get Installed in Toronto
Bike Spotting: Mike Layton on Bloor Bike Lanes