Shared posts

10 Sep 15:06

Today My Son Learned About Tuples

by Julie Moronuki
Tags: Haskell, kids

I’ve been working on Haskell Programming from First Principles. My son and I started an experiment to see how well he, a fifth grader who didn’t know much programming or algebra, could teach himself Haskell using that book. He reads the book, does the exercises, and asks questions where he needs to. I answer his questions and review with him what he has learned, to make certain it’s sinking in. Periodically, I write about the experience for those who might be interested. The first post about this experience is here. I’ll try to start updating more often.

It’s been so long since the last time I wrote about my son’s experiences learning Haskell that I feel a bit of catching up is in order. He started last April, so you’d think he’d have learned more by now than he has. But last summer our life took some strange turns and went absolutely chaotic for a few months, and we didn’t get settled down and into a new school routine until after Christmas. He’s been working on learning Haskell as part of his school work since January.

I had him start the book over. We had added the lambda calculus chapter since the last time he read the book, and I felt strongly that he should try to work through it. He had a few mild issues with terminology, but he was able to do all the exercises correctly except one. That one he completed once we worked through the point where he’d become confused. He’s a smart enough kid, who loves logic problems and math, that’s true, but I also just don’t think the principles of the lambda calculus are as difficult as we are often led to believe.

There hadn’t been much else to report. He got through the first couple of Haskell chapters without anything exciting happening. Today, though, he was in chapter 4 (Basic Datatypes) and was introduced for the first time to tuples.

I wanted to make sure he understood the differences between tuples and lists. He’s not yet at the point where he can read datatypes well, although he has the basic idea of type variables. So, he understands that [a] means all the elements of the list must be the same type, because there’s only one variable. I checked to make sure he understood that the presence of two different variables in (a, b) means that the two types can differ but do not necessarily differ. So, we fired up GHCi and played with that a bit.

Out of curiosity, I then had him start playing with the length function. I get the feeling that the result of

length [1, 2, 3]

is fairly obvious to most people, so long as they have the sense that it’s a list and that length will count the elements of the list.

I wanted to get a feel for his intuitions about tuples, though, so after doing some examples with lists, I had him type

length (1, 3)

and asked him what he thought the result would be. He hesitated slightly, then guessed that it would return 1.

It does.

But, you know, he doesn’t know about higher-kinded types. And he doesn’t know about Functor and Foldable. There’s no real way he could yet. So I asked him why he thought that, to understand his reasoning.

He said that in his understanding a tuple is one value; there are two items within the tuple, but it’s really one value. He entered as evidence

Prelude> length [(1, 2), (2, 4), (5, 6)]
3

If that list has 3 values, then each tuple is one.

It’s solid reasoning. In this case, it’s not quite the correct reason why the length of a pair is 1 – though, here, there is something intuitive about that, is there not? The length of a pair is 1, because it is one pair.

Of course, there are times when a list is also one value, such as

Prelude> length (Just [5, 6, 7])
1

when it is the a of a Just a, or if you had a list of lists, like the list of tuples above.

Still, it was enjoyable watching him reason through this. I can’t wait until he gets to the Functor chapter of the book and finds out the real reason his answer was correct.

If you like my writing or think you could learn something useful from me, please take a look at the book I'm writing with my coauthor Chris. There's a free sample available, too!

Posted on February 29, 2016

02 Mar 03:54

Raspberry Pi 3 on sale now at $35

by Eben Upton

Update: I did a rather poor job of collating the credits list this time. Apologies to Aravind Appajappa, Jeff Baer, Saran Kumar Seethapathi and Noumaan Shah.

Exactly four years ago, on 29 February 2012, we unleashed the original 256MB Raspberry Pi Model B on a largely unsuspecting world. Since then, we’ve shipped over eight million units, including three million units of Raspberry Pi 2, making us the UK’s all-time best-selling computer. The Raspberry Pi Foundation has grown from a handful of volunteers to have over sixty full-time employees, including our new friends from Code Club. We’ve sent a Raspberry Pi to the International Space Station and are training teachers around the world through our Picademy program.

In celebration of our fourth birthday, we thought it would be fun to release something new. Accordingly, Raspberry Pi 3 is now on sale for $35 (the same price as the existing Raspberry Pi 2), featuring:

  • A 1.2GHz 64-bit quad-core ARM Cortex-A53 CPU (~10x the performance of Raspberry Pi 1)
  • Integrated 802.11n wireless LAN and Bluetooth 4.1
  • Complete compatibility with Raspberry Pi 1 and 2
Raspberry Pi 3 Model B

BCM2837, BCM43438 and Raspberry Pi 3

For Raspberry Pi 3, Broadcom have supported us with a new SoC, BCM2837. This retains the same basic architecture as its predecessors BCM2835 and BCM2836, so all those projects and tutorials which rely on the precise details of the Raspberry Pi hardware will continue to work. The 900MHz 32-bit quad-core ARM Cortex-A7 CPU complex has been replaced by a custom-hardened 1.2GHz 64-bit quad-core ARM Cortex-A53. Combining a 33% increase in clock speed with various architectural enhancements, this provides a 50-60% increase in performance in 32-bit mode versus Raspberry Pi 2, or roughly a factor of ten over the original Raspberry Pi.

James Adams spent the second half of 2015 designing a series of prototypes, incorporating BCM2837 alongside the BCM43438 wireless “combo” chip. He was able to fit the wireless functionality into very nearly the same form-factor as the Raspberry Pi 1 Model B+ and Raspberry Pi 2 Model B; the only change is to the position of the LEDs, which have moved to the other side of the SD card socket to make room for the antenna. Roger Thornton ran the extensive (and expensive) wireless conformance campaign, allowing us to launch in almost all countries simultaneously. Phil Elwell developed the wireless LAN and Bluetooth software.

All of the connectors are in the same place and have the same functionality, and the board can still be run from a 5V micro-USB power adapter. This time round, we’re recommending a 2.5A adapter if you want to connect power-hungry USB devices to the Raspberry Pi.

Raspberry Pi 3 is available to buy today from our partners element14 and RS Components, and other resellers. You’ll need a recent NOOBS or Raspbian image from our downloads page. At launch, we are using the same 32-bit Raspbian userland that we use on other Raspberry Pi devices; over the next few months we will investigate whether there is value in moving to 64-bit mode.

FAQs

We’ll keep updating this list over the next couple of days, but here are a few to get you started.

Are you discontinuing earlier Raspberry Pi models?

No. We have a lot of industrial customers who will want to stick with Raspberry Pi 1 or 2 for the time being. We’ll keep building these models for as long as there’s demand. Raspberry Pi 1 Model B+ and Raspberry Pi 2 Model B will continue to sell for $25 and $35 respectively.

What about Model A+?

Model A+ continues to be the $20 entry-level Raspberry Pi for the time being. We do expect to produce a Raspberry Pi 3 Model A, with the Model A+ form factor, during 2016.

What about the Compute Module?

We expect to introduce a BCM2837-based Compute Module 3 in the next few months. We’ll be demoing Compute Module 3 at our partners’ launch events this morning.

Are you still using VideoCore?

Yes. VideoCore IV 3D is the only publicly documented 3D graphics core for ARM-based SoCs, and we want to make Raspberry Pi more open over time, not less. BCM2837 runs most of the VideoCore IV subsystem at 400MHz and the 3D core at 300MHz (versus 250MHz for earlier devices).

Where does the “10x performance” figure come from?

10x is a typical figure for a multi-threaded CPU benchmark like SysBench. Real-world applications will see a performance increase of between 2.5x (for single-threaded applications) and >20x (for NEON-enabled video codecs).

Credits

A project like this requires a vast amount of focused work from a large team over an extended period. A partial list of those who made major direct contributions to the BCM2837 chip program, BCM43438 integration and Raspberry Pi 3 follows: Dinesh Abadi, James Adams, Cyrus Afghahi, Aravind Appajappa, Jeff Baer, Sayoni Banerjee, Jonathan Bell, Marc Bright, Srinath Byregowda, Cindy Cao, KK Chan, Nick Chase, Nils Christensson, Dom Cobley, Teodorico Del Rosario Jr, Phil Elwell, Shawn Guo, Gordon Hollingworth, Brand Hsieh, Andy Hulbert, Walter Kho, Gerard Khoo, Yung-Ching Lee, David Lewsey, Xizhe Li, Simon Long, Scott McGregor, James Mills, Alan Morgan, Kalevi Ratschunas, Paul Rolfe, Matt Rowley, Akshaye Sama, Saran Kumar Seethapathi, Serge Schneider, Shawn Shadburn, Noumaan Shah, Mike Stimson, Stuart Thomson, Roger Thornton, James Tong, James Turner, Luke Wren. If you’re not on this list and think you should be, please let me know, and accept my apologies.

The post Raspberry Pi 3 on sale now at $35 appeared first on Raspberry Pi.

02 Mar 03:54

eSIMs are over-hyped for consumer products

by Dean Bubley
The last couple of weeks - especially with MWC - have seen lots of noise around embedded SIMs (eSIMs). In particular, the GSMA announced its remote provisioning standard (link). 

While interesting and a step in the right direction, I think the industry is over-hyping the potential of eSIMs.

eSIMs are still physical SIMs, but they are built-into devices as fixed hardware components (basically an extra chip soldered-in), rather than as traditional removable cards. They can be remotely-programmed to support different operators' profiles, or switch between them.

This development gets around some of the more awkward practicalities of physical SIM cards in non-phone devices:
  • Physical space & design constraints needed for SIM slot & removable tray
  • Vulnerability to vibration and dust by having a tray/slot
  • Need to get SIM cards into the devices' normal distribution channels & retail stores
  • Potential need for user to source a SIM separately in a different purchase
  • Difficulty for user to swap operators (especially if device is locked to a particular network)
These were some of the problems which stopped widespread adoption of cellular radios and SIM cards in laptops, most tablets and other devices. (I wrote about this a lot in 2006-2008, eg here & here)

In that sense, eSIM is definitely a step forward. More use-cases become practical for cellular connectivity, just at the time when M2M/IoT is finally taking off. However, it would be wrong to assume this means that 4G-connected consumer devices will become the norm. While some categories (eg cars) are widely adopting cellular radios, others (eg wearables, home electrical appliances) are not.

The problem? Cost.

A 4G radio module and a SIM/eSIM remain a significant addition to the per-unit BoM (bill of materials) cost for a manufacturer, on top of the costs of extra design, engineering and testing in creating a cellular version of a product, amortised over the volume sold.

Today, normal 4G modules for devices cost perhaps $20-30, with SIM, battery & other components added to that. 

New versions of LTE are being designed to reduce costs, by cutting down some of the functions of "full" LTE. The target price for a new Cat-1 LTE module, optimised for M2M, is about $15. It's reasonable to imagine that Cat-1 (or Cat-M, its successor) will get to the $10 range over the next couple of years.

In parallel to this, when the new 3GPP NB-IoT low-power standard starts to ship (maybe mid-late 2017, being optimistic) the price should be more like $5-10, with an intention (I'm guessing 2018) to get that below $5. Add on an extra amount for the eSIM licence and design/test costs - probably a few dollars more. Then add on whatever is needed in terms of extra battery, software and so forth.

In other words, even in two years' time, adding cellular to a consumer device will still cost the manufacturer at least $10 and perhaps $20 depending on the power/transmit speed needed, number of frequency bands, fallback to 3G, voice support and so on. While that's better than today, it's still a significant cost for a manufacturer to wear.

Now $10 does not seem like much - or even $30 - until you consider the underlying costs of the devices they're supposed to be built into.

  • A $20,000 car might have a 13% gross margin, or $2600, before amortising the R&D and sales & marketing costs
  • A high-end laptop might have a $100/unit gross margin, and a low-end one maybe $30
  • FitBit (the largest wearables company) makes 46% margin on $87 average selling price, so about $40/unit
  • A $300 washing machine might have 17% margin, so $50/unit
  • A $30 toaster probably has a $5 profit margin
In other words, adding a cellular module now, and also in the mid-term future, represents a large % of gross margin for most consumer devices, irrespective of whether it uses SIM or eSIM.
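To make the point concrete, here is a quick Python sketch of that margin arithmetic (the dollar figures come from the bullets above; the helper function is mine, purely illustrative):

```python
# Rough per-unit margin arithmetic for adding a ~$10 cellular module,
# using the gross-margin figures quoted above. The helper name is
# illustrative, not from any real costing tool.
def module_share_of_margin(gross_margin, module_cost=10.0):
    """Fraction of per-unit gross margin consumed by the cellular module."""
    return module_cost / gross_margin

examples = {
    "car ($20,000 @ 13%)": 20000 * 0.13,         # $2,600/unit
    "high-end laptop": 100.0,
    "low-end laptop": 30.0,
    "wearable ($87 ASP @ 46%)": 87 * 0.46,       # ~$40/unit
    "washing machine ($300 @ 17%)": 300 * 0.17,  # ~$50/unit
    "toaster": 5.0,
}

for product, margin in examples.items():
    share = module_share_of_margin(margin)
    print(f"{product}: module eats {share:.1%} of gross margin")
```

For the toaster, the module costs twice the entire gross margin, which is exactly why the connected-toaster market is a non-starter at these prices.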

Nobody is going to add a $10 extra cost to a toaster which has only a $5 margin, unless they can charge an extra $10 (or preferably $20) for it. And if only perhaps 10% of people actually (a) care enough to want a connected toaster, and (b) are willing to pay the extra cash upfront, then the product will become uncompetitive. Instead, the manufacturer could make two versions - normal & connected - sold at different prices. But that adds complexity in manufacturing, adds inventory costs, and there's no guarantee that retailers will stock both anyway. 

There's also no realistic way for cellular operators to subsidise the new mToasters down to the normal price, unless they sell them in their own stores, or find a way to reward the manufacturers with a sign-up bounty or rev-share once they get activated.

Result - the cellular Connected Toaster market is a non-starter, unless someone works out a way to print adverts on toast in shades of brown, and creates a new business model. And even then, you could probably do it more cheaply and easily with a WiFi Toaster.

Now obviously that's an extreme example - but it is designed to make the point that if (radio module+SIM) is a big % of the underlying device gross margin, and take-up rate is likely to be low, then the concept is not viable. In particular, if a device doesn't already come in (successful and well-used) WiFi-connected versions, it is unlikely to succeed in cellular variants, unless it has wheels or legs, plus high margin and a possible new revenue stream.

A $2000 specialist mountain bike might get a cellular radio built-in. A $20 bike sold in a developing country will not. 

In other words, while eSIM is a helpful advance for some types of connected IoT device, it's not a huge game-changer which will mean every home appliance, and every wearable product, will adopt 4G. Where it is realistic is in categories such as:
  • Expensive items such as cars, which can wear the extra BoM cost of the radio module and SIM/eSIM easily, and which may be re-couped by extra revenue streams to the manufacturer such as warranty sales
  • New categories of devices which need always-on wide area connectivity to function - eg "lost and found" tags or wearables for wayward pets and children, or realtime heart-rate monitors with emergency alert capability for cardiac patients (notwithstanding insurance liability costs).
  • Special "connected" premium-priced versions of products that normally just rely on WiFi or other short-range wireless (eg most wearables)
  • Separately-sold accessories, eg aftermarket security "trackers" for bicycles
  • Existing SIM-connected devices where the physical SIM creates extra complexities (eg some tablets, some industrial machinery - and maybe, finally, mainstream laptops etc)
This also means that the vast bulk of upcoming IoT devices (the quasi-mythical 10bn, 20bn, 50bn figures) will not support cellular, certainly by 2020, and perhaps even by 2025, unless 5G module prices get below $1, which seems unlikely. The majority will either use WiFi or cheaper (& non-SIM) LPWAN technologies, or maybe be aggregated locally via a cellular gateway.

Cellular is definitely a player in IoT, but it will certainly not be ubiquitous, eSIM or not. There needs to be a specific reason and use-case for its inclusion - it is too expensive to be added in by a manufacturer just as an extra feature, except on very expensive/profitable products.

Disruptive Analysis has conducted research projects & internal advisory workshops on SIMs/eSIMs for tier-1 mobile operators, vendors and investors in the recent past, as well as writing about the Apple and Google SIMs. In addition, I've written about IoT Networking & LPWAN technologies for STL Research, as part of its Future of the Network research stream. (details here) Please contact information AT disruptive-analysis DOT com for more details.
02 Mar 03:53

Raspberry Pi 3 on sale now at $35

by Rui Carmo

The new CPU makes it very attractive (even more so for me than the on-board Wi-Fi), but 1GB is too little RAM to go with it. Let’s wait for the next one.


02 Mar 03:53

Firefox OS will Power New Line-up of Panasonic Ultra HD TVs

by Mozilla

Panasonic announced today that Firefox OS will power the new Panasonic DX-series UHD TVs.

Panasonic TVs powered by Firefox OS are already available globally. These TVs have intuitive and customizable home screens which give you “quick access” to Live TV, Apps and personal connected devices. You can access your favorite channels, apps, videos, websites and content quickly – and you can also pin any app or content to your TV home screen.

What’s New in Firefox OS For TVs

Panasonic plans to update the DX-series UHD TVs, first announced in Europe, with a new version of Firefox OS later this year. This update will give you a new way to discover Web Apps and save them to your TV. Firefox OS will feature Web Apps with curated Web content optimized for TV, such as games, news, video on demand, weather and more. You will also get an easy “click to watch” content discovery experience with no installation necessary.

Panasonic’s DX-series UHD TVs powered by Firefox OS will also get new features that provide a seamless Firefox experience across multiple platforms. A new “send to TV” feature will allow you to easily share Web content from Firefox for Android to a Firefox OS-powered TV.

Mozilla and Panasonic have been collaborating since 2014 to provide consumers with intuitive, optimized user experiences and allow them to enjoy the benefits of the Open Web platform.

For more information:

02 Mar 03:52

Star Trek invades our timeline

by Kristina Chodorow

I was at Kennedy Space Center yesterday and they have an exhibit with all of the Apollo mission flags. Having mission flags is a great idea; more software launches should have flags, too. I noticed one in particular:

apollo_flag

(Please excuse the poor image quality, I have a technology-defying ability to take crappy photographs.)

Those symbols on the flag look vaguely familiar…

apollo_rotated

Compare to:

starfleet_insignia

What’s hilarious to me is that Star Trek: The Original Series started airing in 1966. Apollo 15 didn’t launch until 1971, so it must have been pretty blatant that they were copying that. I couldn’t find anything about it with a brief Google search.

Another exhibit about the space shuttle confirmed the intermingling between NASA and Star Trek:

enterprise_shuttle

Live long and prosper!

02 Mar 03:50

Michael Alexander: On view corridors and role conflicts

by pricetags

Price Tags regular Michael Alexander and I had a conversation about this Sun article by Jeff Lee: “Prominent architects slam Vancouver’s planning and development.”

Vancouver’s planning and architecture have suffered over the long term from a lack of foresight by City Hall and heavy-handed management, three of the city’s prominent architects said Friday.

From a singular focus on protecting view corridors and restricting height limits downtown to the departure of experienced staff unhappy with the consolidation of power in the city manager’s office, Vancouver’s architectural and planning direction has drifted off course, said James Cheng, Richard Henriquez and Joost Bakker at an Urban Development Institute luncheon. …

The trio also weighed in on the city’s long-term policy of protecting downtown view corridors from the incursion of tall buildings. Cheng said in theory it may have been a good idea, but needs to be reviewed.

Henriquez was more direct. Such policies have had a negative effect on livability downtown and have deprived the city of “billions” of dollars in development cost charges that could be used for social good, he said.

“When you are increasing density and you are not increasing the height, you are curtailing the possibility of open spaces. It compromises the livability of the downtown by not having open space and having buildings closer together without enough privacy,” he said. “There is tremendous value in the space up top. If you rezone something overheight, you can use the development cost charges for social housing or whatever. There are billions, billions of dollars at issue.”

Says Michael:

I’m not surprised to hear Richard Henriquez wanting to get rid of view corridors, as I’ve heard him make that argument before. I take your point that allowing it by law for a great architect’s towering masterpiece then allows greater height for the dross that others design. The alternative is the widely feared spot zoning. I also take your point that a view corridor from Queen Elizabeth Park may be unnecessarily restrictive.

But fundamentally, I think that Vancouver’s peninsula is defined by its relation to, and views of, mountains and water. The other day I was going south on Commercial Drive, and when I began my return trip north, I was so stunned by the view of the North Shore mountains that it felt like the first time I arrived here. Lose that, and Vancouver looks like every North American downtown. Evidence? Just look at the photo illustrating Jeff Lee’s story. Omaha? Kansas City?

But the part of this Vancouver Sun article that has me scratching my head is this:

The architects’ comments came on the same day the city finally made public a plan to divide in two the traditional role of planning director and manager of development services.

For all of its history Vancouver has relied upon the director of planning to also be responsible for managing development. But the decision to split the roles is an acknowledgment that they are now mutually exclusive and contain an inherent conflict, with daily development pressures impeding long-term city planning.

Perhaps I’ve misunderstood, but I was under the impression that the city has always separated those jobs, and only combined them when Brian Jackson was appointed. I thought Brent Toderian was Director of Planning, period. Wikipedia lists Larry Beasley as Co-Director of Planning (with Anne McAfee). Ray Spaxman’s website says he was Director of Planning and, among other things, Chair of the Development Permit Board. Nothing about being Director or Manager of Development Services.

I thought only Brian Jackson held both jobs. If that’s true, then the city is going back to its original system of separating the jobs.

Who’s right?


02 Mar 03:48

Chris Rock’s Oscars monologue: to create change, use humor

by Josh Bernoff

There is no better way to tell the hard truth than with a joke. First, the audience laughs. Then they think “why is this funny?” Then they get your point. Then they change. That’s why Chris Rock’s Oscars monologue was so telling, while Motion Picture Academy President Cheryl Boone Isaacs’ apology was so pathetic. Humor …

The post Chris Rock’s Oscars monologue: to create change, use humor appeared first on without bullshit.

02 Mar 03:48

Congratulations to Nathan Pachal

by Ken Ohrn

Mr. Pachal has been elected as Councillor of the Langley City Council.  Those who recall his time as Price Tags’ guest editor will appreciate his interest in transit, transit-oriented development, urban design and sustainability.



02 Mar 03:47

The Linguistics of Mass Persuasion Part 2: Choose Your Own Adventure



Chi Luu, JSTOR Daily, Mar 03, 2016


It's always appropriate to restate these points: "In the realm of political persuasion, sophisticated language use can be very effective in swaying an audience. We are encouraged to 'choose' out of a limited set of choices, to fill in obvious information, to resolve the cliffhanger in an already fully-framed narrative—all without necessarily being aware of it. As we engage with politics, it’s important to remember how powerful words can ultimately be, and how easily we can be persuaded by them." What's important is that it's not only in politics where word selection is used to sway or limit choices. Advertisers work with this all the time, and so do educators.

02 Mar 03:47

Samsung Galaxy Tab A (2016) and Galaxy Tab E 7.0 tablets revealed in leaked images

by Evan Selleck
In April of last year, Samsung showed off the A-series of Galaxy Tab tablets, and now it looks like a new version, plus a new letter member, is on the way.
02 Mar 03:47

Samsung Galaxy S7 vs. Galaxy S7 edge: What’s the difference?

by Rajesh Pandey
Compared to last year, the differences between the Samsung Galaxy S7 and its edge variant are more pronounced. The only key difference between the Galaxy S6 and Galaxy S6 edge last year was that the latter had a curved screen, a 50mAh higher capacity battery, and Edge UX.
02 Mar 03:47

More Observations on the ONS JSON Feeds – Returning Bulletin Text as Data

by Tony Hirst

Whilst starting to sketch out some python functions for grabbing the JSON data feeds from the new ONS website, I also started wondering how I might be able to make use of them in a simple slackbot that could provide a crude conversational interface to some of the ONS stats.

(To this end, it would also be handy to see some ONS search logs to see what sort of things folk search – and how they phrase their searches…)

One of the ways of using the data is as the basis for some simple data2text scripts that can report the outcomes of some simple canned analyses of the data (comparing the latest figures with those from the previous month, or a year ago, for example). But the ONS also produce commentary on various statistics via their statistical bulletins – and it seems that these, too, are available in JSON form simply by adding /data to the end of the URL path as before:

Screenshot: UK Labour Market - Office for National Statistics

One thing to note is that whilst the HTML view of bulletins can include a name element to focus the page on a particular element:

http://www.ons.gov.uk/employmentandlabourmarket/peopleinwork/employmentandemployeetypes/bulletins/uklabourmarket/february2016/#comparison-between-unemployment-and-the-claimant-count

the name attribute switch doesn’t work to filter the JSON output to that element (though it would be easy enough to script a JSON handler to return that focus) so there’s no point adding it to the JSON feed URL:

http://www.ons.gov.uk/employmentandlabourmarket/peopleinwork/employmentandemployeetypes/bulletins/uklabourmarket/february2016/data
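That URL convention is easy to wrap in a couple of Python helpers. A minimal sketch (assuming the 2016-era ONS endpoints; `fetch_bulletin` makes a live network call, so treat it as illustrative):

```python
import json
from urllib.request import urlopen

BASE = "http://www.ons.gov.uk"

def data_url(bulletin_path):
    """Build the JSON feed URL for a bulletin by appending /data.

    Any fragment identifier (#section-name) is dropped first, since the
    JSON endpoint ignores it anyway.
    """
    path = bulletin_path.split("#")[0].rstrip("/")
    return BASE + path + "/data"

def fetch_bulletin(bulletin_path):
    """Fetch and parse the JSON form of a bulletin (live network call)."""
    with urlopen(data_url(bulletin_path)) as resp:
        return json.load(resp)

# Example, using the bulletin path from above:
print(data_url(
    "/employmentandlabourmarket/peopleinwork/employmentandemployeetypes"
    "/bulletins/uklabourmarket/february2016/"
    "#comparison-between-unemployment-and-the-claimant-count"
))
```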

One other thing to note about the JSON feed is that it contains cross-linked elements for items such as charts and tables. If you look closely at the above screenshot, you’ll see it contains a reference to an ons-table.

...
sections: [
  ...
  {
    title: "Summary of latest labour market statistics",
    markdown: "Table A shows the latest estimates, for October to December 2015, for employment, unemployment and economic inactivity. It shows how these estimates compare with the previous quarter (July to September 2015) and the previous year (October to December 2014). Comparing October to December 2015 with July to September 2015 provides the most robust short-term comparison. Making comparisons with earlier data at Section (ii) has more information. <ons-table path="cea716cc" /> Figure A shows a more detailed breakdown of the labour market for October to December 2015. <ons-image path="718d6bbc" />"
  },
  ...
]
...

This resource is then described in detail elsewhere in the data feed linked by the same ID value:

Screenshot: JSON data feed for the February 2016 UK labour market bulletin

...
tables: [
  {
    title: "Table A: Summary of UK labour market statistics for October to December 2015, seasonally adjusted",
    filename: "cea716cc",
    uri: "/employmentandlabourmarket/peopleinwork/employmentandemployeetypes/bulletins/uklabourmarket/february2016/cea716cc"
  }
],
...

Images are identified via the ons-image tag, charts via the ons-chart tag, and so on.
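A sketch of how those cross-links could be resolved in code (the dict shapes follow the JSON fragments above; the function names are mine, not from any ONS library):

```python
import re

# Matches <ons-table path="..." />, <ons-image path="..." /> and
# <ons-chart path="..." /> tags embedded in a section's markdown.
REF_TAG = re.compile(r'<ons-(table|image|chart)\s+path="([^"]+)"\s*/>')

def index_resources(bulletin):
    """Map each resource ID (the 'filename' field) to its metadata."""
    index = {}
    for kind in ("tables", "images", "charts"):
        for item in bulletin.get(kind, []):
            index[item["filename"]] = item
    return index

def resolve_refs(markdown, index):
    """Replace ons-* tags with the referenced resource's title, if known."""
    def substitute(match):
        kind, ref = match.groups()
        item = index.get(ref)
        return f"[{kind}: {item['title']}]" if item else f"[unresolved {kind} {ref}]"
    return REF_TAG.sub(substitute, markdown)
```

So the `<ons-table path="cea716cc" />` reference in the markdown would come back as the table's title, in place, ready for a text-only rendering.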

So now I’m thinking – maybe this is the place to start thinking about a simple conversational UI? Something that can handle simple references into different parts of a bulletin, and return the ONS text as the response?


02 Mar 03:46

Book Review – Inside The Machine

by Martin

Back in 2013 I published a post on the different levels on which a computing device could be understood, from a working level down to the physics behind the individual gates. One facet that has been missing for me so far was a look at the features that make today’s CPUs so powerful. “Inside the Machine” by John Stokes filled that gap for me.

“Inside the Machine” is already 10 years old, written back in 2006, and hence not quite up to date anymore, so don’t expect any deep insights into the current 6th generation of the Intel Core architecture. But for me that didn’t matter much, as it still includes the “Core 2 Duo” architecture from which the current i3, i5 and i7 designs derive.

In his book John discusses the major building blocks and ideas behind a modern CPU to increase performance. Superscalarity, multi-CPU architecture, pipelining the fetch, decode, execute and write back phases of an instruction, microcode, fusion, lamination, out of order execution, simultaneous execution, branch prediction, caches, cache lines, associative caches, write policies, front-end, back-end, register renaming, vector operations, instructions for virtual machine operations, x86 vs. x86_64, etc. etc. etc.

10 years ago when the book was written Apple was still using PowerPC processors and was just about to transition to x86 CPUs. The book thus not only describes the evolution of x86 but also that of the PowerPC architecture and makes frequent comparisons between them. After reading the book it becomes quite clear why Apple abandoned the PowerPC CPU at some point in favor of the x86 architecture. Quite fascinating also from a historical point of view.

While I can fully recommend the book, there is one important point readers should be aware of: as the book explains the concepts mentioned above, it has to remain at a fairly high level about how things such as the control unit, the arithmetic logic unit (ALU) and many other components work on the inside. In other words, there’s no discussion of transistors, logical gates, binary adders, counters, flip-flops and other things that make a CPU tick. For that I can fully recommend another book called “But How Do It Know“. I would say there is no overlap between the two books, and my advice thus is to read “But How Do It Know” first before giving “Inside The Machine” a go.

02 Mar 03:46

recSmart looks to make ‘the first connected dashcam’ app-based and social

by Ted Kritsonis

Dashcams have been a thing for a while, and there appears to be no shortage of clips on the Internet showing the humorous and dangerous sides of driving around the world. France-based RoadEyes, a startup with an ongoing Kickstarter campaign, is looking to make the dashcam a fun experience through live streaming and social media with its recSmart.

Calling it “the first connected and social dashcam,” the product was showcased at Mobile World Congress in Barcelona last week. At first glance, it looks like any typical dashcam. The difference is that the recSmart includes both GPS and Wi-Fi connectivity, as well as a microSD card slot and a 140-degree wide-angle lens to give users a fully connected device.

Content is recorded in 1080p and stored directly on the microSD card, with loop recording that overwrites the oldest footage once the card’s limit is reached. Everything is time-stamped and geo-tagged. Moreover, a built-in gyroscope keeps the camera stable on rougher roads or during hard braking.


The premise is simple in the sense that the device captures footage that can then be shared with friends through different channels, including through the dashcam’s own dedicated iOS and Android app. As it lacks its own SIM, the recSmart needs to connect to a phone via Wi-Fi to share or stream the footage it captures. The shared data not only includes video and still images, but also location, speed, time and distance.

This also makes the recSmart configurable from a phone, which may not make sense while a driver is supposed to be focused on the road, but could certainly apply to a passenger who wants to interact with the unit. Without really testing it, it’s hard to gauge how on-the-fly interaction with the app works.

To simplify things, the dashcam comes with a wireless remote button that can be placed on the steering wheel, instrument cluster or somewhere on the dash to trigger photos or video recording. Pressing it takes a photo with five seconds of video before and after the shot — a feature that sounds a lot like Apple’s Live Photos — and sends it to the phone. The remote connects directly to the camera, not the app, so it serves a singular purpose that should theoretically negate having to use the phone to shoot anything.

Still, the app is a central piece that figures prominently in what the recSmart does because it is designed to be a community unto itself. “On the Road” is a live news feed that sorts content by trending, recent and nearby. Users can view, comment, save and share their content this way, and even add music and filters for greater effect.


Beyond that, all the usual social media suspects are going to be supported, including Facebook, Twitter, Instagram, SnapChat, WhatsApp and YouTube, among others.

RoadEyes has also designed the recSmart to be fairly portable, meaning that it doesn’t have to be stuck to its mount to do its job. The built-in 260mAh battery isn’t particularly sizeable, but the dashcam can be removed and used to shoot stills or video when off its base. The company positions this as a good way to capture any evidence for collisions, insurance claims and court evidence, but could just as easily be used for in-car selfies or whatever other use cases one can find for it.

The dashcam category is already pretty crowded as it is, so it remains to be seen if there’s a consumer appetite for adding a social component to it. Live streaming while in an exotic locale sounds like an intriguing scenario, assuming that data is cheap enough to make it feasible. Either way, more and more collisions and examples of idiotic driving are likely to be shared through a platform like this.

The current crowdfunding campaign is running until Mar. 20, with the early bird price set at 99 Euros (about $150 CDN), moving on to 199 Euros thereafter. The company will ship to Canada and is expecting to deliver the first 1,000 units in May.

29 Feb 01:51

Most software already has a “golden key” backdoor—it’s ...

mkalus shared this story from Fefes Blog.

Most software already has a “golden key” backdoor—it’s called auto update

Some paranoid folks pointed this out back when Chrome started with the auto-update stuff. Today hardly anyone remembers it.

Nicely put:

So when Apple says the FBI is trying to "force us to build a backdoor into our products," what they are really saying is that the FBI is trying to force them to use a backdoor which already exists in their products. (The fact that the FBI is also asking them to write new software is not as relevant, because they could pay somebody else to do that. The thing that Apple can provide which nobody else can is the signature.)
These days I’m actually more annoyed that Firefox doesn’t offer diffs for their source-code tarballs, while shipping some shoddy binary-delta technology for the binaries. They simply no longer want anyone to look at the source code or build their own packages from source. That’s just a PR argument now, not a substantive one.
29 Feb 01:51

WhatsApp to discontinue support for BlackBerry 10, Nokia phones by the end of 2016

by Igor Bonifacic

WhatsApp will discontinue support for BlackBerry-made operating systems, including BlackBerry 10, by the end of 2016, the Facebook-owned company announced late Friday afternoon.

Citing the shifting mobile landscape, the company says it wants to focus its efforts on operating systems the majority of mobile phone owners use.

“When we started WhatsApp in 2009, people’s use of mobile devices looked very different from today. The Apple App Store was only a few months old. About 70 percent of smartphones sold at the time had operating systems offered by BlackBerry and Nokia,” says the blog post accompanying the announcement. “Mobile operating systems offered by Google, Apple and Microsoft – which account for 99.5 percent of sales today – were on less than 25 percent of mobile devices sold at the time.”

WhatsApp will also discontinue support for Windows Phone 7.1, Nokia’s S40 and S60 operating systems, as well as Android 2.1 and 2.2 by the end of 2016.

Somewhere, the 0.1 percent of Android users who thought they would never have to upgrade from Froyo are cursing the inevitable march of time.

Source: WhatsApp
29 Feb 01:27

Lightroom, Mobile, Nexus

In which I report on using the Nexus 5X in RAW mode, with the help of Adobe Lightroom, and on workflows for mobile photogs. With illustrations from Vancouver’s Lighthouse Park.

Backgrounder on RAW

(Skip to the next section if you know all this stuff.) A “RAW” picture is supposed to be a bit-for-bit reproduction of exactly what the sensor in your camera saw. RAW pictures usually take up lots of memory, and doing a good job of presenting them on your screen often requires inside knowledge of the quirks of the camera and its sensor. There are a bunch of different RAW formats, but the industry seems to be converging on DNG, which is proprietary but still reasonably open and apparently technically sound.

The images you see on your screen are mostly not RAW, but JPEG or PNG format, the result of taking the RAW bits and producing a compressed, color-corrected, standardized format that’s easy for software to display.

Back in the day, digital cameras produced only JPEGs; anything serious, these days, also produces RAW. Until very recently, phone-cams were all-JPEG-all-the-time.

The reason photographers like to work with RAW pix is that they contain lots more information, so there’s a lot more scope for correcting color or exposure problems.
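A toy numerical sketch of why that extra information matters (my own illustration, using a simplified linear 12-bit-to-8-bit conversion, not how any real camera pipeline works):

```python
def bake_to_jpeg(raw_12bit: int, gain: float = 1.0) -> int:
    """Collapse a 12-bit sensor value (0-4095) to an 8-bit JPEG value (0-255)."""
    return min(int(raw_12bit * gain) >> 4, 255)

# Two distinct highlight values on the sensor:
a, b = 3600, 4000

# An over-bright in-camera rendering clips both to pure white;
# the difference is gone forever in the JPEG:
print(bake_to_jpeg(a, gain=1.2), bake_to_jpeg(b, gain=1.2))  # 255 255

# Editing from RAW, you can pull exposure down *before* the 8-bit
# conversion and keep the highlight detail distinct:
print(bake_to_jpeg(a, gain=0.9), bake_to_jpeg(b, gain=0.9))  # 202 225
```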

Finally, there is no good reason why the perfectly good English word “raw” needs to be rendered as RAW, it just is. I was arguing this on Twitter with David Heinemeier Hansson and he suggested that the caps made it look like an acronym so they’d feel comfy in sentences with other acronyms like JPG and GIF and so on.

Mobile RAW is a thing

Let me show you; here’s a shot toward Bowen Island, the way the bits came out of the camera; then, as tweaked via Adobe Lightroom.

Unimproved Nexus 5X photo Improved Nexus 5X photo

Camera: Nexus 5X, 1/1250 sec, f/2.0, ISO 60, shot with the Adobe Lightroom Android app camera. Second image processed with Adobe Lightroom CC.

The biggest virtue of RAW is exactly this kind of thing: Pulling out bits of the photo that are over-exposed and under-exposed, before they get compressed away into the JPEG. Here’s another:

Unimproved Nexus 5X photo Improved Nexus 5X photo

To be fair, the 5X’s images don’t have the immense data-richness that the Fuji cameras’ do, with oceans of rich detail hiding in the shadows and glare, begging to be pulled out. But I’m pretty sure that if I’d been in JPEG-only land, I would have just deleted both these photos.

Workflow 1: RAW capture

First, you have to convince your camera to take pix in RAW. The last couple of releases of Android have included the API you need, and there are a bunch of camera apps on Google Play that support this.

It turns out that one of them is a camera app that’s unobtrusively embedded in the Android Lightroom app. So that’s what I used here. It’s an OK camera; I wouldn’t say the ergonomics are dramatically better than the one that comes with Android, but it does have a little button with “RAW” written on it. It has nice tilt/level adjustment. In these pictures I basically took all the defaults.

Workflow 2: Edit on the phone?

That Lightroom app can edit photos as well as take them. The idea is, you’ve taken a shot and you’re burning up to Instagram it, but some dork photobombed his selfie stick into the top left corner, so you need to tidy first.

Lightroom Android editor

The browsing presentation is kind of appealing. But the dates are wrong; the ones it claims were shot on the 25th were really on the 27th. Hmmmm…

Lightroom Android editor

On a computer, the Lightroom tilt/crop control is brilliant. It’s a little klunky on the phone, too eager about snapping to the edge. I had problems nipping off just the edge of the sunomono bowl on the left for symmetry with the plate on the right.

That picture came out quite OK; given enough light, the 5X has an appetite for detail. I’ve totally got out of the color-correction habit while shooting Fuji because the X-cams just get it right. The 5X pix often need help; but Lightroom is good at helping.

Sushi

Since I only do light editing on the phone, I don’t have an in-depth opinion about the Lightroom editor vs the one Google ships. But Lightroom’s can handle RAW photos, which is what I mostly plan to be taking.

Workflow — Networking stuff

Up until now, I’d set up DropBox to auto-upload photos on WiFi, then I have a little script that copies them into a handy non-DropBox directory for easy import to Lightroom. Works just fine, if you don’t mind typing a shell command.
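For the curious, a hypothetical Python stand-in for such a script (the paths, extensions, and function name are my assumptions, not the author's actual setup):

```python
# Copy new camera uploads out of the Dropbox sync folder into a staging
# directory for Lightroom import, skipping files already copied.
import shutil
from pathlib import Path

def stage_new_photos(uploads_dir: Path, import_dir: Path,
                     exts=(".dng", ".jpg")) -> list:
    import_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for photo in sorted(uploads_dir.iterdir()):
        dest = import_dir / photo.name
        if photo.suffix.lower() in exts and not dest.exists():
            shutil.copy2(photo, dest)  # copy2 preserves timestamps
            copied.append(photo.name)
    return copied
```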

When I installed the Lightroom-mobile app, I noticed that it had an option for adding photos to Lightroom, so I turned that on.

When I got back from the park, I waited a few minutes for the phone to upload, then went looking for the photos. Took me the longest time to find them; the Lightroom “Filmstrip” thingie across the bottom has a black toolbar where you can select folders and things, and one of the things you can select is “All Synced Photographs”. That’s them.

You can drag them from there to any other folder, but then they’re still in “All Synced”. And if you edit them, then go back to the version on your phone, that version is edited too. I guess that’s cool if you want to show off your pix on your phone. It also means, I suppose, that they’re really in Adobe’s “Creative Cloud”. Now I’m worried, because the Nexus 5X RAW files are 25M apiece — remember to turn on that “Sync only on WiFi” option — and apparently I have a mere 20G of space there.

But wait — when I follow Adobe’s instructions, they seem to be telling me that I have zero bytes in the cloud. So I’m kind of baffled.

It’s still not obvious to me whether I’m going to be happier with the Adobe or DropBox workflows.

But like I said above, mobile RAW pictures are totally a thing.

29 Feb 01:26

Why I'm Running for Drupal Association Re-election

by matthew

This is the first in a series of posts I'm going to write leading up to the Drupal Association elections. I had a funny chat with a supporter a day or two ago. It went something like this:

So you thought, "I have too much free time. I need to become a board member." LOL You are a more patient person than me, good sir. You have my vote.

The truth is, I don't find it a position where I need a ton of patience any more than a coder needs a ton of patience when putting together a module, or an information architect when creating wireframes, or a sales person making calls to drum up business. I believe in service, contribution, and the greater good. We should all give where our strengths lie. I've spent most of my professional life managing software projects, managing non-profits, and engaging in good governance with and on non-profit boards.

Drupal has supported my family for the better part of a decade. It is deeply rooted in my identity now. Some of my best friends, supporters, and colleagues have come from this community. You have supported me, and I want to continue to support you.

While I started experimenting with Drupal in 2006, it became the way I've earned my living in 2007. When I started participating in the community, first by attending meetups and later by making presentations, evangelizing, contributing a bit to documentation, engaging in governance discussions for the community, and helping organize events, what I gave came back double. This deepened my commitment to the community as a whole. When I've had challenges, suggestions and help are only as far away as an IRC client, Skype, or Slack.

What have I done in service to the community?

  • Member of the Drupal Association
  • Helped organize and run Meetups in Boulder from 2007 through 2009
  • Helped with documentation
  • Contributed to discussions on governance and marketing
  • Was on the local committee for Drupalcon Denver and became a Volunteer Wrangler and the Customer Service Wrangler
  • Helped organize and volunteered at Drupalcamp Colorado - starting with the second camp
  • Keynoted at Drupalcamp Austin
  • Presented at Camps, Cons, and Meetups
  • Blogged on the business of Drupal
  • Evangelized the adoption of Drupal
  • Two years on the Drupal Association Board

Re-election would allow me to continue to bring to bear the many years of experience I have with non-profits and with boards. I want to continue and finish up the work I've started.

I hope that you plan on voting in the elections and that you'll consider me as someone that can represent your interests. Please tweet, Facebook, and Google+ these posts. I need as much help as I can get to spread the word! I also need your vote.

My next blog post will talk about my experience and how it would inform my continued efforts as a board member of the DA.

If you would like me to remind you to vote come election day, please sign up here: http://eepurl.com/bQ7P5T

 


Photo by the League of Women Voters of California under a Creative Commons License.
29 Feb 01:26

Randall James Robinson, “Rainwater” mask

by Bruce Byfield

I always appreciate recognizing talent before anybody else does. What interests me is not so much the potential for a piece done early in an artist’s career to increase in value (since I never sell what I buy) as the satisfaction of being first. So when Kelly Robinson, one of my favorite Northwest Coast artists, told me in December that he was teaching his brother Randall to carve, I was immediately interested in the results. And, given his selection of materials and the finish on “Rainwater,” in Randall James Robinson’s case I am already experiencing that satisfaction in the reactions of those who see the mask.

“Rainwater” is one of Robinson’s first masks. The carving is relatively simple, but a good choice for the material. The mask is carved from spalted alder – that is, alder infected with a fungus that discolors the wood. The discoloration apparently does not photograph well, and is actually much smoother-looking than it appears in the photo below, but the point is that the spalting is so interesting in itself that too-elaborate carving would be a distraction, especially since the spalting’s long lines of discoloration suggest long trails of rain running down the mask.

Robinson tells me that he got the wood from Gordon Dick, the carver and owner of the Ahtsik Gallery near Port Alberni, who produced the spalting, but found that it set off allergies when he tried to carve it.

Robinson is carving in the Nuxalk style. The Nuxalk have traditions that are vastly different from those of the northern first nations, such as the Haida, Nisga’a, Tsimshian, and Tahltan. If I understand correctly, one of the major Nuxalk ceremonies is the thunder dance, which celebrates “the greatest of the supernatural beings in Nuxalk culture.” The thunder dance tells of four brothers’ encounter with the spirit of thunder on a lonely hillside, and is apparently the origin story of a major Nuxalk family.

I have seen the thunder dance performed several times by Latham Mack, who has carved a couple of thunder masks. However, I have never seen the rainwater dance, which is performed before the thunder dance. During the rainwater dance, the dancers sprinkle those watching with water as a cleansing ritual. “It’s the bringer of rain before the thunder,” Robinson tells me, meant “to cleanse the earth before thunder.”

Since the entire coast is a rain forest from the American border to Prince Rupert and beyond into Alaska, a rain spirit seems only appropriate to a local culture. In the same way, “Rainwater”’s use of spalting to portray that rain spirit is a choice that speaks well of Robinson’s developing artistic sensibilities. Like any newcomer, Robinson has endless hard work and learning ahead of him in order to have an artistic career, but this early effort suggests that he has the talent to succeed if he chooses.



29 Feb 01:25

Growth Is Hard

by jbat

The post Growth Is Hard appeared first on John Battelle's Search Blog.

The business story of the decade is one of insurgency: Every sector of our economy has spawned a cohort of software-driven companies “moving fast and breaking things,” “asking for forgiveness, not permission,” and “blitzscaling” their way to “eating the world.” For years we’ve collectively marveled as new kinds of companies have stormed traditional markets, garnering winner-take-all valuations and delivering extraordinary growth in customers, top line revenue, and private valuations.

But what happens when the insurgents hit headwinds? In the past year or so, we’ve begun to find out. The unicorn class has had its collective mane shorn. A quick spin through the “unicorn leaderboard” finds a cohort strewn with cautionary tales: Uber’s under continual attack by regulators and increasingly well funded competitors. Square and Box, both of which managed tepid public debuts, have consistently traded below their private valuations. Airbnb, SnapChat, DropBox, and many others have been marked down by their largest investors. And of course, there’s the cautionary tale of Zenefits.

While this news has evinced a self-congratulatory whiff of schadenfreude throughout the tech press, I think the reckoning is more fundamental in nature. The hardest part of running a company, it turns out, is actually running a company. Put another way: Growth can be bought, but growing up has to be earned.

Take Uber, for example. Once a poster child for a culture of “ask for forgiveness, not permission,” Uber is now taking a more traditional approach to new markets, meeting (and working with) local regulators, hiring seasoned pros, and learning how to play politics just like any other big company. It even gave itself a new grown-up “haircut.”

Young companies built on venture-fueled, grow-at-all-costs cultures are facing a shift: The essential truth of business is that you must create value for all constituents — your employees, your customers, and your community. That last part may sound squishy, but it’s the soft stuff that can kill you. Learning to become a respected corporate citizen isn’t on the minds of founders and executives chasing top line revenue and lofty private valuations. But at some point, it will be. I think that moment is nigh.

Facebook is a good example. In 2014 it changed its internal motto from “Move Fast and Break Things” to “Move Fast With Stable Infrastructure.” Among companies I like to call “established insurgents,” Facebook stands out for mindfully transitioning from a culture of youthful arrogance to one of continual learning and partnership. I spend a fair amount of time talking to senior folks at large, established “BigCos,” the kinds of companies that spend hundreds of millions of dollars on platforms like Facebook and Google. Nearly all of them have commented on Facebook’s maturing culture, noting in particular that the company genuinely wants to learn from its big company partners.

What could Facebook possibly learn from a Nestle, P&G, or a Ford, you might ask? We sometimes forget that the current crop of unicorns aren’t the first group of companies to transition from roaring startup culture to Blue Chip incumbency, or from regulatory outlaw to lobbying insider.

Yes, the insurgents threaten the incumbents’ very existence. But they face their own existential questions as well: “How do I build a company that will last for generations? How can I maintain a strong corporate culture now that I have thousands of employees? How do I work productively with regulatory and policy frameworks, now that I’m an established player?”

Turns out, BigCos have decades of experience with exactly those kinds of questions. Over the next few years, I predict the two sides will increasingly engage in a dialog about how each might learn from the other. In many cases, the dialog has already spawned partnerships: Whole Foods and Instacart, GM and Lyft, Earnest and New York Life. In the end, the established insurgents and the BigCo incumbents have more to gain from working with each other than they do by tearing each other down. In the process, we might just re-invent the very nature of business itself — taking the best of both sides, and abandoning the worst. Let’s get on with it, shall we?

The post Growth Is Hard appeared first on John Battelle's Search Blog.

29 Feb 01:25

1-In-5 Of 2016’s New Tech Billionaires Are Chinese Women

by Cate Cadell

2015 was a tough year for billionaires who made their mint in metals and mining, but a much better year to be filthy rich in tech, according to the annual Hurun Rich List report, which ranks the world’s most wealthy people.

Interestingly, 23 percent of new tech billionaires added to the 2016 list are Chinese women. In fact, the nine new additions make up 100 percent of new global female tech billionaires.

Despite a 7 percent drop in their currency, China’s billionaires have become much richer and much more numerous in the past year. As of February 2016, the technology, media and telecommunications (TMT) industry was the main source of wealth for half of the new billionaires, whereas metal and mining billionaires saw their wealth drop.

The new female Chinese tech billionaires include Li Qiong, who is a partner with her husband in Beijing Kunlun Tech, which recently bought a majority stake in US gay dating app Grindr and then entered a bid for Norwegian-based mobile browser Opera. Other women among the nine new tech billionaires hail from companies including Rapoo, VenusTech, Qtone Education and United Electronics.

So Why Are 2016’s New Female Tech Billionaires Exclusively Chinese?

It’s at least partially related to an overall rise in Chinese billionaires. No other country added to the rich list at anywhere near the rate of China, with the combined wealth of their billionaires roughly equaling the GDP of Australia.

The country saw 90 new billionaires in the past year, taking its total to 568. Conversely, the US lost two billionaires, falling below China with just 535. Countries dependent on resources also fell in the rankings, including Russia, Canada and Australia, which dropped 13, four and two billionaires respectively.

Aside from a general rise in Chinese billionaires, China also leads the world in new young billionaires (under 40), which could explain why more female tech names are finding their way onto the list. Women accounted for around 16 percent of the total Chinese people listed, while in the under-40 age category that number went up to 21 percent of the total. The country also accounts for 75 percent of the world’s ‘self-made’ female billionaires, according to the report.

Together, China and the US made up half of the total list. The number of Chinese billionaires has grown 80 percent since 2013, and Beijing is the billionaire capital of the world for the first time ever, adding 32 to reach an even 100 billionaires and beating out New York’s 95.

28 Feb 16:38

Book: The Einstein Theory of Relativity

by Thejesh GN

The quote that “there are only twelve people in this world who understand Einstein’s Theory” surely is not true. But I am sure there are only twelve people in this world who can explain it like I am twelve years old. Hendrik Antoon Lorentz is credited with sharing in the development of the theory. So who better than him to explain it?

This book (free on Gutenberg) feels like a set of science speeches given to the public. The chapters are simple and straightforward, and make you wonder about our textbooks. Considering it’s just 50-odd pages, it’s a must-read. This was my first book about relativity. Now I think I know enough to read the book by the man himself.

Some of my favorite quotes from the book:

Till now it was believed that time and space existed by themselves, even if there was nothing else—no sun, no earth, no stars—while now we know that time and space are not the vessel for the universe, but could not exist at all if there were no contents, namely, no sun, earth and other celestial bodies.

This special relativity, forming the first part of my theory, relates to all systems moving with uniform motion; that is, moving in a straight line with equal velocity. “Gradually I was led to the idea, seeming a very paradox in science, that it might apply equally to all moving systems, even of difform motion, and thus I developed the conception of general relativity which forms the second part of my theory.

Einstein’s theory has the very highest degree of aesthetic merit: every lover of the beautiful must wish it to be true. It gives a vast unified survey of the operations of nature, with a technical simplicity in the critical assumptions which makes the wealth of deductions astonishing.

If we limit the light to a flicker of the slightest duration, so that only a little bit, C, of a ray of light arises, or if we fix our attention upon a single vibration of light, C, while we on the other hand give to the projectile, B, a speed equal to that of light, then we can conclude that B and C in their continued motion can always remain next to each other. Now if we watch all this, not from the movable compartment, but from a place on the earth, then we shall note the usual falling movement of object A, which shows us that we have to deal with a sphere of gravitation. The projectile B will, in a bent path, vary more and more from a horizontal straight line, and the light will do the same, because if we observe the movements from another standpoint this can have no effect upon the remaining next to each other of B and C. The bending of a ray of light thus described is much too light on the surface of the earth to be observed.

Naturally, the phenomenon can only be observed when there is a total eclipse of the sun; then one can take photographs of neighboring stars and through comparing the plate with a picture of the same part of the heavens taken at a time when the sun was far removed from that point the sought-for movement to one side may become apparent.

It remains, moreover, as the first, and in most cases, sufficient, approximation. It is true that, according to Einstein’s theory, because it leaves us entirely free as to the way in which we wish to represent the phenomena, we can imagine an idea of the solar system in which the planets follow paths of peculiar form and the rays of light shine along sharply bent lines—think of a twisted and distorted planetarium—but in every case where we apply it to concrete questions we shall so arrange it that the planets describe almost exact ellipses and the rays of light almost straight lines.


28 Feb 16:38

theacademy: We found the droids we were looking for at...



theacademy:

We found the droids we were looking for at Oscars rehearsals

28 Feb 16:37

"Travel, which should be an adventure and an opportunity for encounters, has devolved so much toward..."

“Travel, which should be an adventure and an opportunity for encounters, has devolved so much toward being cost efficient that something has been lost.”

-

Deniz Gamze Erguven, cited by Kate Murphy in Download

28 Feb 16:37

"We have no good blueprint for how to integrate the contemporary intimacies of female friendship and..."

“We have no good blueprint for how to integrate the contemporary intimacies of female friendship and of marriage into one life.”

-

Rebecca Traister, What Women Find in Friends That They May Not Get From Love

28 Feb 16:36

How trust happens

by Eric Karjaluoto

A lot of designers ask their clients to trust them. What they’re really doing, at such moments, is asserting themselves as professionals. And it bugs them to have their capabilities second-guessed.

I’ve been there, and it’s a weird spot. On one level you feel emasculated. On another, you wonder why this person hired you in the first place. Still, the designer who responds, “just trust me,” is failing in his/her role. This designer also misunderstands how trust works.

The moment you ask your client to trust you, you’ve already lost—because you’re not really asking for their trust. You’re asking for faith. They don’t trust you, and now you’re making a plea for their charity. If they give it to you, they deny their feelings. This creates a tenuous set-up. They pretend to trust you, but they’re actually biting their tongues, out of courtesy.

You won’t get trust by asking for it, but you can earn it.

Chances are that your client likes you well enough, but isn’t sure about what you’re presenting. Maybe your client is scared—and rightly so. If your strategy fizzles, you still have a pretty piece in your portfolio. However, your client might pay for your misstep for years to come. So, cut them a little slack.

Besides, your job as a designer isn’t just to make something nice—or even something effective. Your job is to lead the process. You’re here to provide guidance—because your client probably doesn’t do this sort of work every day. So, you need to forget any sense of entitlement, and do your job.

Listen to their concerns. Ask more questions. Find out where the problem lies. Then suggest plans for moving forward. It’s not about you, or about how you feel. This role involves being a professional, and guiding your client to a solution they feel good about.

Once you’ve done that, you won’t need to ask them to trust you. They just will.

Trust isn’t only formed when you’re working with a client. You establish this through your history, your actions, and your consistency. Trust is something you work to build long before you’ve even met your clients.

Last week, my wife and I zipped by the credit union to put money into an RRSP. The young banker we met presented a few options. One was to make a slightly riskier investment, with a marginally higher rate of return. I had no interest in this, as rates are weak. Besides, I already have lots of risk as a result of our startup. This investment was simply a way to wipe out our tax bill—while keeping our money secure.

His body language was clear. He was disappointed. He wanted me to trust him. Maybe I should have. Perhaps we’d earn a few extra dollars over the year, as a result of doing so. I didn’t, though—because I don’t trust people I’ve only known for two minutes. Here’s a kid in a suit, with a slick haircut. Is that enough to warrant my trust?

He doesn’t know me, or my wife, or our financial state. He doesn’t know what other investments we have, or what we’re looking to achieve. He might want me to trust him, but he’s done nothing to earn that trust. I sit politely, as he repeats the same pitch he’s told countless others. To me, though, this is just a formality. His opinion is essentially worthless.

Now, if David Chilton were to walk in that room and tell me what to do, I’d probably do it. Even though I’ve never met him, his books earn my trust. I don’t know Paul Graham, either. But, if he told me our startup should go a different way, I’d pay attention—because I’ve read enough of his essays to trust him. Similarly, when Gordon Ramsay explains how to cook eggs, I follow his instructions—because… well, you get the idea.

Every post you write, every talk you give, every time you stand for something, you build trust. Maybe not with everyone, but likely with some. Sure, you can lose this trust—by abusing it, or by being inconsistent in your actions. For most, though, this isn’t an issue.

The takeaway from this is that trust is valuable. Once you’ve earned it, you gain privileges others don’t. There aren’t any shortcuts, though. You can’t ask someone for their trust, any more than you can ask someone to love you. Instead, you need to do the work, and become someone who deserves their trust.

28 Feb 07:49

Sam Beckett’s Advanced Control Center Concept

by Federico Viticci

Control Center was introduced with iOS 7 in 2013. Since then, it has benefited from minor visual tweaks and the recent inclusion of a Night Shift toggle with iOS 9.3. In future updates, it would be great to see Control Center gain more hardware and system toggles, along with the ability for users to customise which toggles they require and where they are positioned. An enhanced Control Center could also add support for 3D Touch for additional options and introduce a new system-wide dark mode.

I don't typically publish iOS concepts on MacStories, but Sam Beckett's latest video is so close to my idea for a customizable Control Center from last year that I just couldn't resist. Tasteful, well researched, and with some great ideas for integrating 3D Touch, too.

28 Feb 07:43

Seeking Evidence of Badge Evidence


Alan Levine, CogDogBlog, Mar 01, 2016


I am in agreement with Alan Levine: "being badged is a passive act, even with blockchain secure authority, it is done to you. As important, is what you do yourself, in active tense, to demonstrate your own evidence. Get badged, yes, that's one part of showing what you have done. But get out there, get a domain, and show the world what you can do. That is evidence." Nobody would care what I have to say if all they saw were a few badges. But once I put my papers and articles out there, then they are seen, and readers decide for themselves whether I'm worth reading.

28 Feb 07:42

Ben Franklin, the Post Office and the Digital Public Sphere

by Ethan

My dear friend danah boyd led a fascinating day-long workshop at Data and Society in New York City today focused on algorithmic governance of the public sphere. I’m still not sure why she asked me to give opening remarks at the event, but I’m flattered she did, and it gave me a chance to dust off one of my favorite historical stories, as well as to show off a precious desktop toy, an action figure of Ben Franklin, given to me by my wife.


If you’re going to have a favorite founding father, Ben Franklin is not a bad choice. He wasn’t just an inventor, a scientist, a printer and a diplomat – he was a hustler. (As the scholar P. Diddy might have put it, he was all about the Benjamin.) Ben was a businessman, an entrepreneur, and he figured out that one of the best ways to have financial and political power in the Colonies was to control the means of communication. The job he held the longest was as postmaster, starting as postmaster of Philadelphia in 1737 and finally getting fired from his position as postmaster general of the Colonies in 1774, when the British finally figured out that he was a revolutionary who could not be trusted.

Being in charge of the postal system had a lot of benefits for Ben. He had ample opportunities to hand out patronage jobs to his friends and family, and he wasn’t shy about using franking privileges to send letters for free. But his real genius was in seeing the synergies between the family business – printing – and the post. Early in his career as a printer, Franklin bumped into one of the major challenges to publishers in the Colonies – if the postmaster didn’t like what you were writing about, you didn’t get to send your paper out to your subscribers. Once Ben had control over the post, he instituted a policy that was both progressive and profitable. Any publisher could distribute his newspaper via the post for a small, predictable, fixed fee.

What resulted from this policy was the emergence of a public sphere in the United States that was very different from the one Habermas describes, but one that was uniquely well suited to the American experiment. It was a distributed public sphere of newspapers and letters. And for a nation that spanned the distance between Boston and Charleston, a virtual, asynchronous public sphere mediated by print made more sense than one centered around physical coffee houses.

Franklin died in 1790, but physician and revolutionary Benjamin Rush expanded on Franklin’s vision for a post office that would knit the nation together and provide a space for the political discussions necessary for a nation of self-governing citizens to rule themselves. In 1792, Rush authored The Post Office Act, which is one of the subtlest and most surprising pieces of 18th century legislation that you’ve never heard of.

The Post Office Act established the right of the government to control postal routes and gave citizens rights to privacy of their mail – which was deeply undermined by the Alien and Sedition Acts of 1798, but hey, who’s counting. But what may be most important about the Post Office Act is that it set up a very powerful cross subsidy. Rather than charging based on weight and distance, as they had before Franklin’s reforms, the US postal system offered tiered service based on the purpose of the speech being exchanged. Exchanging private letters was very costly, while sending newspapers was shockingly cheap: it cost a small fraction of the cost of a private letter to send a newspaper. As a result, newspapers represented 95% of the weight of the mails and 15% of the revenue in 1832. This pricing disparity led to the wonderful phenomenon of cheapskates purchasing newspapers, underlining or pricking holes with a pin under selected words and sending encoded letters home.

The low cost of mailing newspapers, as well as the absence of stamp taxes or caution money, which made it prohibitively expensive to operate a press in England, allowed half of all American households to have a newspaper subscription in 1820, a rate that was orders of magnitude higher than in England or France. But the really crazy subsidy was the “exchange copy”. Newspapers could send copies to each other for free, with carriage costs paid by the post office. By 1840, the average newspaper received 4,300 exchange copies a year – they were swimming in content, and thanks to extremely loose enforcement of copyright laws, a huge percentage of what appeared in the average newspaper was cut and pasted from other newspapers. This giant exchange of content was subsidized by high rates on those who used the posts for personal and commercial purposes.

This system worked really well, creating a postal service that was fiscally sustainable, and which aspired to universal service. By 1831, three quarters of US government civilian jobs were with the postal service. In an almost literal sense, the early US state was a postal service with a small representative government attached to it. But the postal system was huge because it needed to be – there were 8700 post offices by 1830, including over 400 in Massachusetts alone, which is saying something, as there are only 351 towns in Massachusetts.

I should note here that I don’t really know anything about early American history – I’m cribbing all of this from Paul Starr’s brilliant The Creation of the Media. But it’s a story I teach every year to my students because it helps explain the unique evolution of the public sphere in the US. Our founders built and regulated the postal system in such a way that its function as a sphere of public discourse was primary and its role as a tool for commerce and personal communication was secondary. They took on this massive undertaking explicitly because they believed that to have a self-governing nation, we needed not only representation in Congress, but a public sphere, a space for conversation about what the nation would and could be. And because the US was vast, and because the goal was to expand civic participation far beyond the urban bourgeois (not universal, of course, limited to property-owning white men), it needed to be a distributed, participatory public sphere.

As we look at the challenge we face today – understanding the influence of algorithms over the public sphere – it’s worth understanding what’s truly novel, and what’s actually got a deep historical basis. The notion of a private, commercial public sphere isn’t a new one. America’s early newspapers had an important civic function, but they were also loaded with advertising – 50-90% of the total content, in the late 18th century, which is why so many of them were called The Advertiser. What is new is our distaste for regulating commercial media. Whether through the subsidies I just described or through explicit mandates like the Fairness Doctrine, we’ve not historically been shy in insisting that the press take on civic functions. The anti-regulatory, corporate libertarian stance, built on the questionable assumptions that any press regulation is a violation of the first amendment and that any regulation of tech-centric industries will retard innovation, would likely have been surprising to our founders.

An increase in inclusivity of the public sphere isn’t new – in England, the press was open only to the wealthy and well-connected, while the situation was radically different in the colonies. And this explosion of media led to problems of information overload. Which means that gatekeeping isn’t new either – those newspapers that sorted through 4,300 exchange copies a year to select and reprint content were engaged in curation and gatekeeping. Newspapers sought to give readers what an editor thought they wanted, much as social media algorithms promise to help us cope with the information explosion we face from our friends’ streams of baby photos. The processes editors have used to filter information were never transparent, hence the enthusiasm of the early 2000s for unfiltered media. What may be new is the pervasiveness of the gatekeeping that algorithms make possible, the invisibility of that filtering and the difficulty of choosing which filters you want shaping your conversation.

Ideological isolation isn’t new either. The press of the 1800s was fiercely opinionated and extremely partisan. In many ways, the Federalist and Republican parties emerged from networks of newspapers that shared ideologically consonant information – rather than a party press, the parties actually emerged from the press. But again, what’s novel now is the lack of transparency – when you read the New York Evening Post in 1801, you knew that Alexander Hamilton had founded it, and you knew it was a Federalist paper. Research by Christian Sandvig and Karrie Karahalios suggests that many users of Facebook don’t know that their friend feed is algorithmically curated, and don’t realize the way it may be shaped by the political leanings of their closest friends.

So I’m not here as a scholar of US press and postal history, or a researcher on algorithmic shaping of the public sphere. I’m here as a funder, as a board member of Open Society Foundation, one of the sponsors of this event. OSF works on a huge range of issues around the world, but a common thread to our work is our interest in the conditions that make it possible to have an open society. We’ve long been convinced that independent journalism is a key enabling factor of an open society, and despite the fact that George Soros is not exactly an active Twitter user, we are deeply committed to the idea that being able to access, publish, curate and share information is also an essential precursor to an open society, and that we should be engaged with battles against state censorship and for a neutral internet.

A little more than a year ago, OSF got together with a handful of other foundations – our co-sponsor MacArthur, the Ford Foundation, Knight, Mozilla – and started talking about the idea that there were problems facing the internet that governments and corporations were unlikely to solve. We started asking whether there was a productive role the foundation and nonprofit community could play in this space, around issues of privacy and surveillance, accessibility and openness, and the ways the internet can function as a networked public sphere. We launched the Netgain challenge last February, designed to solicit ideas on what problems foundations might take on. This summer, we held a deep dive on the question of the pipeline of technical talent into public service careers and have started funding projects focused on identifying, training, connecting and celebrating public interest technologists.

The NetGain Challenge: Ethan Zuckerman from Ford Foundation on Vimeo.

We know that the digital public sphere is important. What we don’t know is what, if anything, we should be doing to ensure that it’s inclusive, generative, more civil… less civil? We know we need to know more, which is why we’re here today.

I want to understand what role algorithms are really playing in this emergent public sphere, and I’m a big fan of entertaining the null hypothesis. I think it’s critical to ask what role algorithms are really playing, and whether – as Eytan Bakshy and Lada Adamic’s research suggests – echo chambers are more a product of users’ choices than of algorithmic intervention. (I argue in Rewire that while filter bubbles may be real, the power of homophily in constraining your access to information is far more powerful.) We need to situate the power of algorithms in relation to cultural and individual factors.

We need to understand what are potential risks and what are real risks. Much of my current work focuses on the ways making and disseminating media is a way of making social change, especially through attempting to shape and mold social norms. Algorithmic control of the public sphere is a very powerful factor if that’s the theory of change you’re operating within. But the feeling of many of my colleagues in the social change space is that the work we’re doing here today is important because we don’t fully understand what algorithmic control means for the public sphere, which means it’s essential that we study it.

danah and her team have brought together an amazing group of scholars, people doing cutting-edge work on understanding what algorithmic governance and control might and can mean. What I want to ask you to do is expand out beyond the scholarly questions you’re taking on and enter the realm of policy. As we figure out what algorithms are and aren’t doing to our civic dialog, what would we propose to do? How do we think about engineering a public sphere that’s inclusive, diverse and constructive without damaging freedom of speech, freedom to dissent, freedom to offend? How do we propose shaping engineered systems without damaging the freedom to innovate and create?

I’m finding that many of my questions these days boil down to this one: what do we want citizenship to be? That’s the essential question we need to ask when we consider what we want a public sphere to do – what do we expect of citizens, and what would they – we – need to fully and productively engage in civics. That’s a question our founders were asking almost three hundred years ago when Franklin started turning the posts and print into a public sphere, and it’s the question I hope we’ll take up today.