Shared posts

22 Jun 21:45

The rise of Managed Voice Infrastructure... or perhaps VIaaS or UCaaSaaS ?

by Dean Bubley
Over the last year or so, I've watched the emergence of a new category of communication service - but it's quite hard to define a decent name for the trend.

In essence, it's where telcos/SPs offer some form of VoIP, either for consumer telephony or enterprise UC/hosted-PBX... but don't actually manage the infrastructure internally themselves. Instead, a vendor manages and delivers the capability via its own software/cloud-based platform. The SP sells (perhaps customising / integrating / bundling) the "outsourced" service to final end-users. It's not really the same as a straightforward channel play (eg resale of SaaS applications), because there's typically various network and OSS integration aspects, such as the need for SBCs, link with numbering and perhaps QoS, coupling with emerging NFV strategies etc.

The most prominent providers are probably:
  • BroadSoft BroadCloud
  • GenBand Nuvia
  • Alianza Cloud Voice Platform
  • Metaswitch MetaSphere Cloud Services
In addition to these, there are probably more that I'm not fully aware of. Alongside this, various other vendors are offering managed/hosted VoLTE, API/PaaS platforms and other applications (yes, even Jibe's hosted-RCS, now owned by Google) also intended to be offered through SP channels.

But for me, the main event here is service providers outsourcing their "core" telephony functions - including business-oriented comms which typically go beyond just voice - to vendor-controlled cloud platforms.

If you take the view that it's too difficult for SPs to develop their own innovative and differentiated consumer-voice or UCaaS platforms, then the next question is whether they really add any value by deploying their own servers for somebody else's application either. This is especially true for "vanilla" applications like PSTN telephony, SIP trunking and basic hosted-PBX functionality.

The answer is probably one of scale, risk-appetite, and the ability and willingness to attract software developers - as well as the target audience of end-users that a given SP faces. Larger operators ought to have the resources to develop unique and well-customised propositions, even if they are based on a vendor's application platform or straightforward standardised telephony functions.

Smaller providers - especially in rural areas - may find the idea of such hosted/wholesale offers more intriguing, especially as the cost-models may allow them to phase in such products over time without heavy upfront investment.

In developing countries, the initial user-base for UCaaS might be very low as well - especially if broadband infrastructure is still being built-out and adopted by small businesses. Again, a wholesale offer might make sense, assuming the data-centres are close enough and long-haul links suitable.

There may also be a fixed/mobile divide here - it could be argued that residential fixed/cable operators have better things to focus on than infrastructure for delivering a commodity telephone service that few people use. Or, perhaps, mobile-centric operators may not have the ability and willingness to deal with the hybrid fixed/mobile nature of most UCaaS.

I think we're still at early stages here. Vendors are mostly being quite cautious, to avoid looking as if they compete with their SP customers, or are taking too much of the available profit-pool in that sector. Yet as NFV matures, both telephony and UCaaS do start to look a bit more like today's other SaaS offers - which can be hosted in telcos' own data-centres, but are more often anchored in Microsoft's or Amazon's.

It's also hard to come up with a pleasing category-name for this. Some people have suggested "wholesale telephony", but that's too similar to various other traditional bits of the wholesale/transit market. "White-label voice & UC" is another option, but I've spoken to some people who recoil from that. "Cloud voice" can apply to about a dozen separate intersections of communications and cloud, so is too vague in my view. For the business side, I quite like UCaaSaaS but that's too much of a mouthful, although it looks fun on screen.

Maybe "VIaaS"? Voice-Infrastructure as a Service? Or my current favourite: MVI, Managed Voice Infrastructure? Some will grumble that it makes the video and messaging/collaboration bits look like second-class citizens, but I think it perhaps captures the essence of what's occurring - and it also spans both consumer VoIP telephony and UCaaS.
22 Jun 21:45

Building Static Wasp Lisp Binaries on Linux

Wasp Lisp builds binaries that are linked dynamically to glibc. This ties the binary to specific versions of Linux: it's usually not possible to run a binary on an OS with an older glibc than the one it was compiled against. I wanted to be able to run a single binary of Wasp Lisp and MOSREF drones on new Ubuntu versions and on some machines with an older version of Ubuntu. To do this I needed libc to be linked statically.

Changing Wasp Lisp to statically link glibc doesn't work though. Some networking routines in glibc require dynamic linking. If glibc is statically linked then networking doesn't work.
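
If you want to see the failure mode for yourself, a fully static glibc link typically produces linker warnings along these lines for the name-resolution routines, which load NSS plugins at runtime. The command below is only illustrative and not the actual Wasp build line, and the exact wording varies by glibc version:

$ gcc -static -o wasp *.o -levent    # illustrative only, not the real Wasp Makefile invocation
warning: Using 'getaddrinfo' in statically linked applications requires at runtime
         the shared libraries from the glibc version used for linking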

The solution I opted for is to use musl libc instead of glibc. This is a libc that was designed to be statically linked. Building Wasp Lisp binaries with musl requires:

  • Building musl libc
  • Building libevent using musl libc headers
  • Building Wasp Lisp against musl and libevent

Building musl libc

Building musl libc requires using git to clone the repository and following the standard configure, make, make install invocations. The bin directory for the musl tools is added to the PATH:

$ git clone git://git.musl-libc.org/musl
$ cd musl
$ ./configure
$ make
$ sudo make install
$ export PATH=$PATH:/usr/local/musl/bin/

Building libevent

Building libevent with musl requires using the musl-gcc command which was installed by the previous step. This invokes GCC with the required options to use musl. The following steps perform the build:

$ wget https://github.com/libevent/libevent/releases/download/release-2.0.22-stable/libevent-2.0.22-stable.tar.gz
$ tar xvf libevent-2.0.22-stable.tar.gz
$ cd libevent-2.0.22-stable/
$ ./configure --prefix=/tmp/musl/usr CC=musl-gcc --enable-static --disable-shared
$ make
$ make install

Building Wasp Lisp

The Wasp VM source requires a change to the Makefile.cf to use static linking for all libraries. This changes:

EXEFLAGS += -Wl,-Bstatic $(STATICLIBS) -Wl,-Bdynamic $(DYNAMICLIBS)

to:

EXEFLAGS += -static $(STATICLIBS) $(DYNAMICLIBS)

I've made this change in the static branch of my GitHub fork. This branch also includes some other changes from the official repository for real number support. Building with musl and libevent is done with:

$ git clone https://github.com/doublec/WaspVM --branch static
$ cd WaspVM
$ CC=musl-gcc CFLAGS="-I /tmp/musl/usr/include -L /tmp/musl/usr/lib" make repl

This drops you directly into the Lisp REPL. The following confirms that the resulting binary is static:

$ ldd wasp
not a dynamic executable

Building MOSREF

The stub generated is also static and can be used to build static drones:

$ cd mod
$ ../waspc -exe ../mosref bin/mosref
$ chmod +x ../mosref
$ ../mosref
console> set addr=xx.xx.xx.xx
console> set port=8000
console> drone mydrone foo linux-x86_64
Drone executable created.

The generated drone should run on a wider range of Linux versions than the non-static build, at the cost of a larger size. I rename the waspvm-linux-x86_64 stub to waspvm-musl-x86_64 so I can generate dynamically linked or static drones as needed from the MOSREF console by using linux-x86_64 or musl-x86_64 respectively.
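
For reference, that rename is just a file move in whatever directory Wasp keeps its stubs. The path below is a placeholder, so adjust it to your own install layout, and keep a dynamically linked stub from a regular glibc build around if you still want non-static drones:

$ cd path/to/waspvm-stubs            # placeholder path; use your actual stub directory
$ mv waspvm-linux-x86_64 waspvm-musl-x86_64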

22 Jun 21:45

Five-word movie review: The Hobbit: The Battle of the Five Armies

by sheppy

Good but unlike the book.

22 Jun 21:44

Vancouver mayor (again): Do something, province, about “unregulated speculative global capital”

by Frances Bula

The temperature just keeps going up in this city; the debate heat is its own kind of climate change.

June 5th, 2016 (Vancouver, BC) – In the past week, we’ve heard a growing chorus of concerns about the economic risk posed by skyrocketing housing prices in Vancouver. Both the Bank of Nova Scotia and the National Bank of Canada urged the federal government to intervene in our housing market, and the OECD’s recent Global Economic Outlook warned that Vancouver’s economy is at risk due to rising household debt and surging housing prices.

It’s fitting that this was the same week as the Real Estate Board of Greater Vancouver released new data showing a 37% increase in year over year detached housing prices in Metro Vancouver. These trends are not sustainable and we need to be wide awake to the risks they pose to the stability of our economy, let alone the impact they have in pushing local residents, especially young people, families, and seniors, out of our neighbourhoods.

While adding more housing supply is crucial, it is not an affordability solution on its own. With unregulated, speculative global capital flowing into Metro Vancouver’s real estate, we are seeing housing prices completely disconnected from local incomes. First and foremost, housing needs to be for homes, not just treated as a commodity.

I urge the provincial and federal governments to heed the warnings from the financial sector and implement clear measures to rein in the excesses of Vancouver’s housing market. The CEO of Scotiabank spoke out in support of a luxury sales tax. The deputy chief economist at CIBC supports a ‘flipping’ tax as a measure to reduce speculation. I support both these tools and will continue to aggressively advocate for them to the federal and provincial governments as a way to help create a level playing field in the Vancouver housing market.

22 Jun 21:44

Is it possible for Vancouver to do an affordable home-ownership plan, given our insane land prices?

by Frances Bula

Vancouver has said it’s going to start a small “affordable home-ownership” program, with an initial 300 units over three years.

It’s hardly the first city to do this. There are programs all over the continent that have started up as resort towns, universities, mid-sized cities, and large cities have all tried to find a way to ensure that one group of potential residents — people making decent but not fabulous incomes — can find a place to live in their communities.

In a recent story for the Globe, I talked to the people who run some of the programs elsewhere: Toronto, Whistler, UBC. One woman who’s been doing this for a while, Heather Tremain, had serious reservations about how Vancouver is going to be able to do this, given land prices.

In a recent speech to the UDI, Bob Rennie (marketing guru, Svengali, adviser and fundraiser for a wide variety of politicians) suggested that Vancouver should stay out of this game because, even with a 20-per-cent reduction in price that it’s proposing, that isn’t any better than what the same buyer could get by moving to Burnaby or Coquitlam.

My very certain guess is that Vancouver will pay no attention to Rennie on this one. Yes, it's true, all housing problems can theoretically be solved by telling people to drive until they find something they can afford. But many civic leaders, housing advocates, and regular folks are uncomfortable with that.

One: That then risks producing a city populated by only two kinds of people, the extremely wealthy and a service class, willing to cram into tiny, shared spaces in the cheaper parts of town. That’s not a healthy mix.

Two: It takes away a choice that many feel should be on offer. That is: if you really, really want to be in the heart of your community and you’re willing to accept less space and less of a profit on your home, we will find an option for you.

After all, affordable home-ownership plans, the ones that survive, aren’t really offering a discount. They are offering to let someone buy at below-market rate, in return for selling at below-market rate. Many people won’t like that deal. Either they aren’t that wedded to the city and they’ll move to Burnaby or Coquitlam, where they can buy outright and perhaps get something bigger. Or they’ll find a way to stretch to buy at market price in Vancouver, so they can get the full profit.

I should say that I continue to hear from people about one of the niggling problems with these programs, which is enforcing the rules. One affordable home-ownership effort that BC Housing took part in turned out to be somewhat disappointing. People got a deal to be able to buy — and then their units started showing up as rentals on Craigslist, in complete violation of the spirit and the letter of their agreements.


22 Jun 21:44

The $200 Monoprice Maker Select Mini 3D Printer

by John

I’ve talked about inexpensive 3D printers for a while now. It’s going to help get a lot more people interested in 3D printing but many of the printers on the market fall into that ‘you get what you pay for’ category and fall short. Until now. Monoprice just started shipping their $200 Maker Select Mini 3D printer.

I first heard about this printer a week ago and had one delivered on Friday. I bought mine through Amazon.com partly because I was skeptical it was as good as it seemed and figured that it would be easier to return if I wasn't happy. Monoprice lists the printer at $199 but Amazon had it for $215 with shipping. I had it shipped to my US mailbox in Lynden and the full charge worked out to $290 Canadian. I declared it at the border and didn't have to pay any further taxes or duty.

Strangely, they originally had three different buying options on Amazon (all from Monoprice): with and without taxes, with and without Amazon Prime shipping, and one with both tax and paid shipping. As I write this it's sold out everywhere, with restocks not likely until mid-July.

As I’ve only had it for a weekend, these are my impressions so far. Most of my prints were done with Hatchbox (green) or Proto-Pasta (coffee) PLA. Both printed wonderfully.

Maker Select Mini

In the box you get the printer, a single piece of paper telling you where to download the manual, a power supply, an allen key (for leveling the bed platform), a plastic spatula (for scraping your prints off the bed), a paltry 256MB SD card and a micro USB cable. It doesn't come with any filament, so make sure you have some 1.75mm PLA on hand or order it when you order the printer.

The build area (aka platform) is 120mm x 120mm x 120mm – it’s just shy of being able to print an object the height of a can of pop:

Maker Select Mini

For software, the manual recommends using Cura or Repetier. I like Cura so used that to prepare my files for printing. It should also work with Simplify 3D if you manually create a profile. I figure someone will ask for my Cura settings so here they are:

Cura settings

It has one of the nicest, most accessible interfaces I've ever seen on a 3D printer, let alone one at this price: a bright, full colour display that clearly shows the information you need to load filament and print. It's worth noting that low cost printers rarely come with SD card/LCD control. My M3D required a tethered computer to operate, likely to keep the costs down. This printer gives you a great experience without the need for a tethered computer while printing (you still need one to prepare the files though).

Control interface

Despite Monoprice's website saying it's a touchscreen, it's not, which is too bad as the scroll wheel (which has a nice satisfying click and a blue backlight) is sometimes partly blocked by the build plate. The display shows the currently printing file, percent complete, bed & extruder temperatures and a simple to access cancel button. You can also adjust the print speed right from this screen while printing.

Maker Select Mini

Choosing a file to print is also really nice: unlike most Arduino-based print controllers, it clearly displays the full filename.

Preheating and moving the platform are also done in a nice, clear way. Yes, it has a heated bed too! But it's limited in how hot it can get, so don't expect to print lots of exotic filaments with this without modding the crap out of it. The limitation is likely due to the 10 amp power supply, but it's also baked into the firmware. Stick to PLA filament and you'll be fine. You don't even need to heat the bed for PLA.

Preheat mode
Maker Select Mini

When you’re finished a print, it displays the elapsed time and gives you the option to print the file again or go back to the main menu.

Print finished

The other big thing that I find hard to believe is the build quality of this printer. It weighs over 16 lbs and is made from folded sheet metal. Everything is accessible but not overly exposed like most budget printers, which is good for use around children.

Photos: Micro USB & Micro SD card slot; extruder w/30mm fan; Bowden drive; power switch & AC adapter

Here’s some of the print quality I’ve gotten so far:
Photos: sample file from included SD card; 3D Benchy; my cactus; my custom fan guard; 3DKit Bash makes great models; Ghost in coffee filament (top and bottom); print-in-place gear; articulated hand; folding car

I’ll be posting more photos and videos to my Flickr album as they come off the printer.

This video shows it printing a 3d scan of me and gives you an idea of the sounds it makes while printing. It’s not silent but it’s not crazy loud either.

Printing a mini-me

Maker Select Mini

It’s not the best quality print I’ve done from a scan, but it’s not horrible either…some of the blips and zits can easily be addressed with settings.

Still, it really is incredible to get this kind of quality from a $200 printer with minimal fuss, right out of the box. I have spent a little time tweaking the print settings, which doesn't hurt, but even the elephant that was on the included SD card printed great, and it's easy to clone its print settings into Cura to replicate that experience.

This is a print of the Fillenium Malcon (heh) done on the Mini and a $2000 printer:

Maker Select Mini

Can you tell them apart? Sure you can…but there are ways to make the lesser one better quality just by tweaking the software and print settings. The more expensive printers generally have more robust hardware, have bigger build areas and support a wider variety of filament. But this seems to be the perfect 'starter' printer that combines ease of use, build & print quality, and affordability. It's a machine like this that will help people decide if this is something they want to spend more time and money on, without dropping a lot of money on a bigger machine (which they will likely want once they get their feet wet).

I find it really hard not to recommend this printer to anyone that wants to get into 3D printing at a super low cost, without having to build it from a kit or fuss with much right out of the box. I don't feel like this printer is going to fall apart anytime soon compared to some of the kits I've seen. The other elements this printer has (great LCD display, heated bed, sheet metal construction) also make it feel like a printer costing double or even triple what I paid for it.

One thing that I can't stress enough is that having access to inexpensive (but still decent) technology like this printer for schools, students, individuals and enthusiasts is crucial in moving this technology forward. I literally see thousands of people each year that get really excited about 3D printing, but often that's where it ends because the cost of entry is too high and there aren't enough places around that let people actually use 3D printers. Sure, lots of libraries and schools are getting them, but they never have enough machines to meet the demand and often require their technician to operate them for fear of breaking them. It's hard not to think that this machine (and the range of clones sure to follow) will be the tipping point in the mainstream 3D printing realm. Mattel's $300 3D printer is due in October and people were thinking that might be the machine that tips things…I think we may already be there.

Update (June 13): One thing I didn’t cover originally is the fact that upon boot up, the Mini had a strange message:


It looked like 'no wifi connection'. I had attributed it to a generic firmware that is possibly also used on the Pro machine coming soon. But it turns out, thanks to folks that like to take things apart, that the controller does indeed support wifi. Using the non-Monoprice firmware, some folks have been able to enable wifi and use it remotely. There are also apparently desktop and smartphone apps for control and for sending prints to the printer. I'm going to give it a week or so to see if Monoprice releases updated firmware (which is also supposed to address the nozzle temperature issue where some people have noticed ±5°C temperature swings while printing, something that should be handled by proper PID values). It also turns out that the controller itself is a 32-bit ARM Cortex M3, which explains the snazzy user interface. This little printer just keeps getting better and better for the money.

See this in-depth technical review at Hackaday as well. There is also a rapidly growing group on Facebook.

Also, today I received this from Monoprice:

(photo: a small filament sample and a discount code)

I appreciate the gesture; it's just such a small amount of filament that it's basically useless. The discount code is nice – it's generic so feel free to use it as I don't need any more filament for a while. It's also not Monoprice's fault, but this was shipped to my US mailbox. I wasn't sure what it was when I got notification of a parcel being delivered, so the trip and parcel fee kind of cancel out any 'bonus' for me. But at least I could do a grocery run.

Update 2: I couldn’t wait and went ahead and updated the firmware and got the wifi working:

wifi-monoprice

I was able to get my Android tablet (Nexus 7) to connect using the Malyan app. It just seems to be good for remote monitoring the printer so far.

wifi-monoprice-android

Here’s a video of configuring and remotely controlling the printer via Repetier Host:

The post The $200 Monoprice Maker Select Mini 3D Printer appeared first on johnbiehler.com.

22 Jun 21:44

About Jacob Appelbaum

by Tristan

Jacob Appelbaum, a Tor developer, is leaving the project after finding himself at the centre of a harassment scandal (sexual and psychological) that is reminiscent of the Baupin affair:

22 Jun 21:44

Where people go to and from work

by Nathan Yau

commute

With an animated take on the commute map, Mark Evans shows where people commute to work.

The resulting animations are somewhat hypnotic (even my dog seemed to go into a trance watching them leading to minutes of human amusement) but also provide a visual way of quickly seeing the distribution of workers into a given city. The points are sized based on the number of commuters, so a large dot indicates a higher relative number of commuters moving from the same tract to the same tract. The dots are also color coded to see which counties are most represented in the commuter sample.

Just select a county to see. [Thanks, @Mikey_Two]

Tags: commute

22 Jun 21:44

How to hit on a stranger: two contrasting pick-up lines

by Josh Bernoff

You're going to need help from people you don't know. Getting them to respond is like hitting on a stranger. Your pick-up line makes all the difference. Two pick-up lines, two reactions. Two strangers contacted me last week. Each wanted a conversation. (Since they were personal messages, I've concealed the identities of the senders.) "Lenny" started hitting on … Continue reading How to hit on a stranger: two contrasting pick-up lines →

The post How to hit on a stranger: two contrasting pick-up lines appeared first on without bullshit.

22 Jun 21:44

Book: Anarchism and Other Essays

by Thejesh GN

Anarchism and Other Essays

A web spiral on Labor Day celebrations in India landed me on the Wikipedia page of a well-known anarchist and feminist of her day. Though I was very uncomfortable[1] with some of her methods, her views on many things are very interesting and, dare I say, very relevant today. As further reading I picked up this book, Anarchism and Other Essays, which outlines her ideas on anarchism, prisons, political violence, education, sex, women's rights, art, etc.

Emma, born in Russia[2], emigrated to the United States in 1885. Influenced by the Haymarket affair[3], she became the spokesperson of anarchism. She wrote and lectured about a variety of issues, including prisons, atheism, freedom of speech, capitalism, marriage, free love, and homosexuality. The essays here deal with the same subjects in detail. She went to jail multiple times for various reasons, including distributing information about birth control and inspiring people not to register for the draft. Later she was deported to Russia under the Anarchist Exclusion Act. She died at the age of 70 in Toronto. There is so much to read about her life; Wikipedia is a good place to start.

I knew very little about anarchism before starting this book, and it worked as an introduction to anarchism for me. There are many solutions that anarchism offers that I agree with and some that I don't. I wish there were a system that was a good mix of democracy and anarchism. My biggest issue with the essays here, and with anarchism generally[4], is the acceptance of the tactical use of violence. For me violence in any form is unacceptable. Once I accepted that difference, I was very comfortable with the rest of the ideas. Though it was published in 1910, the questions Emma raises are the same questions that people are still asking. It makes me wonder what happened in the last hundred years.

The book is a must-read for anyone who is interested in anarchism and related ideas. It's in the public domain. The book is full of quotable quotes; here are some of my favorites.

Anarchism asserts the possibility of an organization without discipline, fear, or punishment, and without the pressure of poverty: a new social organism which will make an end to the terrible struggle for the means of existence — the savage struggle which undermines the finest qualities in man, and ever widens the social abyss. In short, Anarchism strives towards a social organization which will establish well-being for all.

My lack of faith in the majority is dictated by my faith in the potentialities of the individual. Only when the latter becomes free to choose his associates for a common purpose, can we hope for order and harmony out of this world of chaos and inequality.

A thorough perusal of the history of human development will disclose two elements in bitter conflict with each other; elements that are only now beginning to be understood, not as foreign to each other, but as closely related and truly harmonious, if only placed in proper environment: the individual and social instincts. The individual and society have waged a relentless and bloody battle for ages, each striving for supremacy, because each was blind to the value and importance of the other. The individual and social instincts — the one a most potent factor for individual endeavor, for growth, aspiration, self-realization; the other an equally potent factor for mutual helpfulness and social well-being.

Religion! How it dominates man’s mind, how it humiliates and degrades his soul. God is everything, man is nothing, says religion. But out of that nothing God has created a kingdom so despotic, so tyrannical, so cruel, so terribly exacting that naught but gloom and tears and blood have ruled the world since gods began.

A natural law is that factor in man which asserts itself freely and spontaneously without any external force, in harmony with the requirements of nature. For instance, the demand for nutrition, for sex gratification, for light, air, and exercise, is a natural law. But its expression needs not the machinery of government, needs not the club, the gun, the handcuff, or the prison.

Anarchism, then, really stands for the liberation of the human mind from the dominion of religion; the liberation of the human body from the dominion of property; liberation from the shackles and restraint of government. Anarchism stands for a social order based on the free grouping of individuals for the purpose of producing real social wealth; an order that will guarantee to every human being free access to the earth and full enjoyment of the necessities of life, according to individual desires, tastes, and inclinations.

Even voting for the right thing is doing nothing for it. A wise man will not leave the right to the mercy of chance, nor wish it to prevail through the power of the majority.

Will it not lead to a revolution? Indeed, it will. No real social change has ever come about without a revolution. People are either not familiar with their history, or they have not yet learned that revolution is but thought carried into action.

In politics, naught but quantity counts. In proportion to its increase, however, principles, ideals, justice, and uprightness are completely swamped by the array of numbers. In the struggle for supremacy the various political parties outdo each other in trickery, deceit, cunning, and shady machinations, confident that the one who succeeds is sure to be hailed by the majority as the victor. That is the only god — Success. As to what expense, what terrible cost to character, is of no moment. We have not far to go in search of proof to verify this sad fact.

Today, as then, public opinion is the omnipresent tyrant; today, as then, the majority represents a mass of cowards, willing to accept him who mirrors its own soul and mind poverty.

What, then, is patriotism? “Patriotism, sir, is the last resort of scoundrels,” said Dr. Johnson. Leo Tolstoy, the greatest anti-patriot of our times, defines patriotism as the principle that will justify the training of wholesale murderers; a trade that requires better equipment for the exercise of man-killing than the making of such necessities of life as shoes, clothing, and houses; a trade that guarantees better returns and greater glory than that of the average workingman.

Society considers the sex experiences of a man as attributes of his general development, while similar experiences in the life of a woman are looked upon as a terrible calamity, a loss of honor and of all that is good and noble in a human being. This double standard of morality has played no little part in the creation and perpetuation of prostitution. It involves the keeping of the young in absolute ignorance on sex matters, which alleged “innocence,” together with an overwrought and stifled sex nature, helps to bring about a state of affairs that our Puritans are so anxious to avoid or prevent.

If love does not know how to give and take without restrictions, it is not love, but a transaction that never fails to lay stress on a plus and a minus.

The demand for equal rights in every vocation of life is just and fair; but, after all, the most vital right is the right to love and be loved. Indeed, if partial emancipation is to become a complete and true emancipation of woman, it will have to do away with the ridiculous notion that to be loved, to be sweetheart and mother, is synonymous with being slave or subordinate. It will have to do away with the absurd notion of the dualism of the sexes, or that man and woman represent two antagonistic worlds.

Free love? As if love is anything but free! Man has bought brains, but all the millions in the world have failed to buy love. Man has subdued bodies, but all the power on earth has been unable to subdue love. Man has conquered whole nations, but all his armies could not conquer love. Man has chained and fettered the spirit, but he has been utterly helpless before love. High on a throne, with all the splendor and pomp his gold can command, man is yet poor and desolate, if love passes him by. And if it stays, the poorest hovel is radiant with warmth, with life and color. Thus love has the magic power to make of a beggar a king. Yes, love is free; it can dwell in no other atmosphere. In freedom it gives itself unreservedly, abundantly, completely. All the laws on the statutes, all the courts in the universe, cannot tear it from the soil, once love has taken root. If, however, the soil is sterile, how can marriage make it bear fruit? It is like the last desperate struggle of fleeting life against death.

  1. Tactical uses of violence
  2. present-day Kaunas, Lithuania
  3. Peaceful rally in support of workers striking for an eight-hour day, turned into series of violent events after killings by police. The Haymarket affair is generally considered significant as the origin of international May Day observances for workers. – Wikipedia
  4. As explained in this book
22 Jun 21:44

The Inconvenience of $1.50

by Nora Devlin

IMG_5207

As I walk in, I glance at the sign that says "Local 1964 AFL-CIO" and smile at the greeter who stands by the front door. Walking down the aisle I peruse the household cleaners. All natural, $4; organic, $9; wood-safe, $3. Taken slightly aback by the cost, I pull out my phone and bring up the same products in the Amazon app: $1.50 difference. I pause to think about that $1.50 and spiral into an internal dialogue about capitalism and my place within it. Not only do these chemicals eventually go down my drain back into a water source that I share with my neighbors, but the money I spent represents my values. $1.50 is less than the price of a small ice cream, and also, apparently, the margin necessary to pay the local grocery workers a living wage. I pick the product up off the shelf—the $4 natural multi-surface cleaner—and continue down my list.

The presence and use of Amazon’s apps on my phone are part of a larger socioeconomic process. As gentrification runs rampant in my small city, the local use of Amazon and other delivery apps increases and local union grocery stores have closed one after another, resulting in many unemployed people. One of the many insidious aspects of late capitalism is its ability to force a competition between time-saving and wage-saving.  The convenience of technology necessitates further trust in and reliance on the rest of society. Or, as PJ Rey puts it: “There is no such thing as a lone cyborg.” What we often ignore, however, is how our choice of convenience simultaneously necessitates that our local community also become more reliant on large infrastructures and less self-sustaining. As Christian Fuchs explains in Labor in Informational Capitalism and on the Internet, the unemployed class is an inevitable byproduct of technological progress in a capitalist society that must be continuously deprived of wage labor and capital.

It is difficult to connect Amazon saving us $1.50 on surface cleaner to rising unemployment numbers. Throughout my life in capitalist cultures, capitalism has taught me, us, to think that saving $1.50 every time we shop is savvy. Bargain hunting has even been elevated to televised sport: Design on a Dime or Extreme Couponing come to mind. If we know anything about a commodity it is how much it costs and speculations on how much we can underpay are seen as responsible financial planning. What is not under consideration is which product does a better job of preserving people’s opportunities to work with dignity. I and everyone else have little information about which soap helps retain collective bargaining power or which brand of toilet paper supports union-busting political action committees. The social life of things prior to their arrival on shelves is purposefully obscured by a veil of individual consumerism.

Corporations like Amazon have reinforced this individualism by investing in technology that emphasizes the convenience for the individual consumer who shops online. Inherent to their designs is the hope that Amazon will make our purchases thoughtless and instant. The less time we spend making a decision, the fewer opportunities there are to second-guess the implications of our actions. Let me be clear, this convenience is what we love about Amazon. I forget things a lot, and when I do remember to buy something, it is seldom while I am at the store; the Amazon app and Free Prime Shipping have almost always had my back in those times. In the end, “Free Prime Shipping,” Amazon Pantry, and Dash Buttons cost something to someone, and we are all affected by the price we pay for the convenience of “free.”

If you are anything like me (graduate student, part-time jobs) you have probably felt like you have “benefited” from Amazon’s low (i.e. not profitable) prices at some point. I admit to buying books, board games, and other random items from Amazon pretty regularly. In moments of price comparison, Amazon beats my other options in its quest for my money more often than I would like to admit so publicly (but I know the data is out there somewhere). But do we really benefit from Amazon’s convenience? There are a couple of ways to look at this. One way is the old Econ 101 fairy tale of every individual maximizing utility and seeking out the lowest price for the best product. On the other hand, we can take a Marxist perspective and examine how the capitalist class is actually exploiting all of us—not just those laborers who are out of work when the grocery stores close—through its promotion of individual convenience.

So how does Amazon not just exploit those whose jobs are threatened by the convenience market that Amazon provides, but exploit those of us who use the service, too? Well, for one, these two groups—those who use Amazon and those who Amazon puts out of work—are not mutually exclusive. Those in positions of economic precariousness may well rely on the discount goods and cheap/free delivery that Amazon provides—just like the local shopkeeper may have to patronize the Walmart that threatens hir livelihood.

Second, free and cheap delivery is not free or cheap, but paid for with the currency of personal data. Amazon does not just know what I buy, it knows what I want, how much of it I want, and when I think about purchasing what I want. Amazon knows my birthday and my relatives' birthdays. And with Charmin's dash button, Amazon can know an uncomfortable amount about a household's bathroom habits. Economic value, Christian Fuchs notes, is produced by the commodification of users and audiences. Capital is (re)produced for Amazon through reviews, purchase history, and other personal information; they are not making a profit from the majority of their sales. Rather, they are trading short-term profits for the long-term advantage of being indispensable to our buying and spending habits. Therefore, Fuchs argues that even yuppie consumers are included in this exploited (non-capitalist) class. Amazon customers don't just buy goods, but freely give the capitalist their data. Each purchase is therefore an act of uncompensated labor.

This relationship between conveniences, data, and wages is complex, but getting out of the problem is fairly simple. "None of this is a problem" writes David A. Banks, "if we decouple work from individuals' means of subsistence. If extended unemployment were not tantamount to a slowly unfurling death sentence, then the rise of robot deliveries would be very desirable save for those that gleam some sort of emotional reward for delivering the mail."

How would we ever go about disconnecting the means of subsistence from the modes of production? If we are actually members of the multitude and Amazon actually profits from our mere potential as consumers, what can we really do but submit to the corporate overlords? According to Fuchs, exploitation of prosumers, “has the potential to be channeled into political demands, such as the demand for a wage for all unpaid knowledge workers, which is equivalent to the demand for the introduction of a universal basic income guarantee.” Fuchs argues that, united, we can reassert our collective economic power by resisting our own exploitation. Unlike Fuchs, I’m not ready to say that Twitter and Facebook need to start paying users for their content production. I believe this would probably entrench us all much more fully in our own oppression by transforming leisure time into paid participation in yet another market designed to further capitalist interests. Nevertheless, I agree with Fuchs that a Marxist outlook can help us to reflect on the fact that our online leisure and shopping activities are, in fact, forms of labor. When given the chance to think about concrete ways we as tech-users could leverage this labor in acts of resistance, the opportunities are vast.

As content creators, users of Facebook, Twitter, and even Amazon are uncompensated workers. Imagine one day, even one hour, where the internet goes silent and content is not produced. What if bloggers protested google ads for higher wages by striking?  With 700,000 weekly tweets, Scandal is certainly not paying everyone for their buzz on Twitter. If that’s not collective economic power, I don’t know what is. But even if we are not a member of a blogger’s union and don’t often find ourselves in hugely popular hashtag conversations on Twitter, we have plenty of moments when we can question the status quo.

Despite convenience and low cost being tantalizing and often necessary for my own functioning in everyday life, the moments when I open the Amazon app are as much opportunities for reflection as they are for impulse purchases. The reality of a shared car, too long a distance to walk home with laundry detergent, and limited time, let alone money, have all played a role in my Amazon Prime membership quickly "paying for itself." But the person I really want to be is the thoughtful, neighborly cyborg who smiles at a greeter at a grocery store who can pay rent and feed his kids. I want to be a person who opens the Amazon app in moments of resistance—to see how much I am willing to sacrifice for my values—rather than temptation. If the price of becoming who I want to be is the price of a living wage, I will pay my $1.50 willingly.

Nora Devlin is too multifarious to describe herself in only two sentences. Tweet her @NoraAnneDevlin and read her occasional ramblings at noradev.tumblr.com.

22 Jun 21:43

Firefox 48 Beta, Release, and E10S

by Asa Dotzler

In the next few days, Firefox 48 Beta becomes available. If all goes well in our beta testing, we're about 6 weeks away from shipping the first phase of E10S to Firefox release users with the launch of Firefox 48 on August 2nd.

E10S is short for "Electrolysis". Similar to how chemists can use the technique called electrolysis to split water into hydrogen and oxygen, we're using project Electrolysis to split Firefox into a UI process and a content process. Splitting UI from content means that when a web page is devouring your computer's processor, your tabs and buttons and menus won't lock up too.

E10S has been enabled for some portion of our Beta audience since December of 2015 and we’ve had it enabled for half of our Beta population for the last 6 weeks. The team has been comparing the half with E10S to the half without for things like stability, responsiveness, memory usage, and more. And so far, so good. We’ve met all of our release criteria and assuming nothing shows up in Beta 48, we should be good to go.

(When we hit release in about six weeks, not all of our Firefox 48 users will get E10S. The teams have been working really hard but we’ve still got some compatibility and other work to do to make E10S ready for everyone. The groups that will have to wait a bit for E10S account for about half of our release users and include Windows XP users, users with screen readers, RTL users, and the largest group, extension users.)

This is a huge change for Firefox, the largest we’ve ever shipped. But don’t worry. The Electrolysis team at Mozilla has a release roll-out plan that ensures we’re going slowly, measuring as we go and that we can throttle up as well as down depending on what we see.

Here's what that looks like. When we launch Firefox 48, approximately 1% of eligible Firefox users will get updated to E10S immediately. That 1% of release users should get us up to a population similar to what we have in Beta, so we'll be able to compare the two. About ten days after launch, we'll get another round of feedback and analysis related to the release users with and without E10S. Assuming all is well, we'll turn the knobs so that the rest of the eligible Firefox users get updated to E10S over the following weeks. If we run into issues, we can slow the roll-out, pause it, or even disable E10S for those who got it. We have all the knobs.

As noted earlier, this is just the first phase. Next up we’ll be working to get E10S to the cohorts not eligible in Firefox 48. We want 100% of our release users to benefit from this massive improvement. After that, we’ll be working on support for multiple content processes. With that foundation in place, the next projects are sandboxing for security, and isolating extensions into their own processes.

It’s an exciting time at Mozilla. E10S is the largest change we’ve ever made to Firefox and we hope you’ll help us get through this with as few surprises as possible. To help out, get on Beta and let us know what you find.
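
If you're on Beta and curious whether you're in the E10S half, two quick places to look (labels and pref names are from Firefox 47/48-era builds and may change) are the troubleshooting page and the multiprocess pref:

about:support   ->  check the "Multiprocess Windows" row (a value like 1/1 means E10S is active)
about:config    ->  browser.tabs.remote.autostart is true when E10S is enabled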

update: There is some confusion about what’s new here. I think I can clear that up. E10S has been in beta for some time. That’s not new. It was there for half of our beta users for the entire previous 6-weeks cycle. What’s new here is that we’ve just recently met all of our release criteria and we think we can take the feature from beta to release in the next 6 weeks. Now we’re down to one final cycle — assuming we don’t encounter any surprises. That’s where you all come in. Please help us test this upcoming Firefox 48 beta well so we have confidence when we get to the end of the beta cycle that E10S works well for everyone that gets it. Thanks.

22 Jun 21:43

Saving the EU with Solo Bass…

by Steve

…Well, probably not, but these tunes do form two halves of an instrumental reflection/meditation on the impending EU Referendum in the UK (where we get to vote on whether we want to be in or out of the EU) – it’s pretty clear to me that staying in makes by far the most sense – the ‘risks’ of leaving offer nothing of substance in the way of counter-balance, and the kind of utter pond-life that are the politicians fronting the Leave campaign would be enough to make me turn my back on a long-held ideology. It’s a full list of venal, grasping, compassionless careerists.

I’ve been discussing it and sharing articles relating to it on Facebook a LOT over the last few weeks, but today I decided to put some music in the mix. I don’t write word-based music, instead I write a soundtrack to thoughts, ideas, feelings, and conversations. So here, in two parts, is my contribution to that. Which is sure to sway enough voters to clinch this for the Remain vote, right? :)

Enjoy!

and part 2:

22 Jun 21:43

Aged to perfection

by tychay

Eight years ago, at a party, a friend asked me who I preferred for the Democratic nomination. I said, “Barack Obama.”

“What about Hillary?” she asked, clearly bummed.

“She’s great too, and if she wins, I’d be proud to cast my vote for her to become the first female President of the United States.”

Looks like I’ll get my chance.

I wonder what my friend is thinking right now. To all you die-hard Hillary-supporters: sure, it was an eight-plus year wait, but it’ll have been worth it.

About the media conspiracy theories

A Facebook friend, who has studied and worked with and in the media for his entire career, implied that the AP breaking the story that Sec. Hillary Clinton had won the primary proved that a news agency was "100% in the tank for someone named Clinton." To which a few people, Bernie supporters, agreed and added their own conspiracy theories to boot:

Here was my response:

AP did it because they are a news agency and wanted the scoop. They saw that because of Puerto Rico, it would be easy to get it. The first two names on the byline are people whose job it is to survey superdelegates for the AP, which they've done continually (look them up in NEXIS). They probably found the 17 most likely people they knew could be pestered until they announced. As someone who's worked in and studied the media, I'd think this would be obvious.

For at least two weeks now, Hillary Clinton's campaign has been privately asking superdelegates to hold off on announcing until after NJ. Look at how they scheduled what clearly would be a victory party in NYC for Tuesday night and Wednesday (along with Obama scheduled to be in the same city on the same day). Clearly they're not happy either. Unfortunately there are 720 of them and they're their own people, not beholden to anyone in particular (as much as I find the superdelegate system detestable, that is the point of them).

The only thing the press is “100% in the tank for” is clicks, Facebook posts, and retweets. And it clearly worked for the AP this time.

After that, someone accused me of “making too much sense this election cycle.” So my response to them was:

My bad. Forget what I said. Bernie was on Market Street today, so clearly this is a ploy by the media industrial complex to make my primary vote not matter. 😉

(BTW, I’m with Her, but I’m really excited to see so many facebook friends feeling the Bern. I hope each and every one of them ignores the AP article and votes for their preferred candidate.)

Another friend said that they were sick because “CA voters… totally don’t matter to the AP.” My answer was:

You’ve lived in California your entire life. When has your primary vote ever “mattered” in the way you say above?

Besides, the biggest middle finger you can give them is to vote anyway. 🙂

Go vote. Don’t ever let anyone take away that which people lived and died to give you by telling you it doesn’t matter.

22 Jun 21:43

N3xt

by d

 What’s n3xt? Let me start by saying that if you’re looking for a 30-second answer, or a quick pitch, or some shortcut to a category (“it’s like Tinder for piano tuners!”), or something along those lines, you will probably be disappointed.

For new products or technologies there’s this pervasive idea that if people can’t figure out what your thing does in 30 seconds, then you’re toast. Done. Or that if you can’t explain why something is valuable in a single sentence you haven’t distilled the concept enough.

That's true in some areas. There's value in brevity, sure. But brevity can also be a straitjacket. It forces us to use shortcuts, makes us dependent on analogies and pre-existing categories. It creates a rigid set of constraints that limit the possible complexity of the message and subsequent discourse.

I understand that this is sort of breaking "the rules", such as they are, and that in doing so I might not be able to reach some people. That's ok. I do this not for novelty or to 'be original' or whatever, but rather because I believe that complexity isn't a vice, and if we end up only communicating ideas that can fit on a bumper sticker it will become increasingly difficult to tackle the challenges that lie in front of us. I also believe that the extra effort and time will be worth it.

So, with that in mind…

Bits are fusing with atoms. Sensors and screens are proliferating, and along with them storage, networking and compute power.

Devices are becoming more specialized in both form and function.

And the world in which pixelated rectangles are the only pathway to our data will be gone.

In time, applications as we know them today will no longer exist. The Facebook News Feed is a great example of what the post-app world looks like in two dimensions.

In it we don’t see apps.

We see photos, messages, likes. Data plus context.

Information.

N3xt is software that runs on personal devices — phones, tablets, laptops, TVs, etc. It creates a personal mesh network that makes it easier to share data between devices, and it can expand outward to create circles of trust with peers connected not only based on geography or network topology but also real relationships between people. It can consume data from different sources and aggregate it in a way that can only be done client-side, creating context and connections across disparate services and data sources that would not be feasible otherwise. Leveraging context to show only the data that’s relevant allows us to break away from one-dimensional navigational mechanisms like lists or scrolling that become necessary when, unable to determine what is important, we just present a whole list of items that only might be.

Thanks for taking the time to read this far. Much more will be coming soon. In the meantime, you can follow me @diegodoval and @n3xtapp or send questions via email.

PS: www.whatsn3xt.com will be expanding. Not much going on there right now. I’ll talk about why in more detail in a future post.

(cross-posted to my medium.)


22 Jun 21:42

Rust (Programming Language) Hack & Learn Session #1

by Robert "Bob" Reyes
Rust is a new systems programming language developed by Mozilla. The Mozilla Philippines Community (MozillaPH) is inviting everyone to a meetup for open hacking and (co)learning of Rust (Programming Language). Rusties/Rusters of all skill levels are welcome. Please bring your laptop for this meetup. Never written a single line of Rust, but interested in the language? Don’t worry, most of us are just starting off. Already have a project? Just hack on it or find others interested in contributing. If… Read the rest
22 Jun 21:42

I Had An NL-Alert Yesterday – Some Technical Details

by Martin

Yesterday I was in the Netherlands for a day and by chance it was just the day when they had a trial run of their NL-Alert system at noon that broadcast a test message to all mobiles in the country. I was quite surprised I got the message with my non-Dutch CyanogenMod Android device so I had a quick look at the technical details to find out more.

The first screenshot on the left shows the message as it was shown on my screen while I was sitting in the train. In addition I got an audible indication and another dialog box afterward that asked me if I wanted to continue receiving alerts. For a second I thought it might be a malfunction but I quickly discarded that thought as the message was in Dutch. Also I heard a discussion on a radio station in a café in Arnhem later on but couldn't understand the details as it was in Dutch.

Anyway, the NL-Alert web site has some more information on the system for users but not much else from a technical point of view. If you don't speak Dutch, use Google Translate to convert the web page into English. Publicly available technical background information on the system is not easy to find, but eventually Google turned up this page for me. According to this, the system is based on 3GPP's CMAS (Commercial Mobile Alert System), which uses the GSM, UMTS and LTE Cell Broadcast mechanism for quick message delivery.

Cell Broadcast is something out of the 1990s and was used in the early days of GSM for a few applications such as displaying local area codes on the display that could be called for reduced rates or other things of interest to the general public. But network operators could never make money out of cell broadcast so there wasn't a lot of use in most countries in the last decade. But then it was rediscovered by homeland security departments as being an ideal bearer for emergency messages to the public.

NL-Alert is the Dutch implementation of the EU-Alert system. Apart from the Netherlands, however, I’m not aware of any other country in Europe that has deployed such a system. A bit of a shame. In other parts of the world other similar systems are used such as the Earthquake and Tsunami Warning System (ETWS) in Japan for example.

So back to the question of why I received the alert in the first place, since I'm not a Dutch mobile network subscriber and I was not aware my smartphone even had a function to turn reception of such messages on or off. But it does: it's in the "Wireless and Network" settings in the "Emergency Broadcast" section. Receiving emergency broadcasts is switched on by default, which is why I got the message. As I am using CyanogenMod on my device, the CMAS implementation must be straight from Google!

22 Jun 21:42

Hey Tableau Analyst. What does the Data Say?

by Kelly

I like a lot of charts.  I have nothing against any particular chart and have probably used and abused all of them at one time or other.  I'm not a stickler for rules and I don't demand best practices of myself or my charts.

I try to find the best possible way to SEE the results. That's in the context of the dashboard as a whole; most importantly, the chart chosen depends on the data's distribution.

Before I begin to analyze and investigate the data, I can't tell you what chart(s) I'll settle on for the dashboard.  And halfway through building a dashboard, many charts will get discarded or radically changed [I don't draw my dashboards out first either].

The example below shows how the chart you think would be perfect may sometimes not be the best chart possible. Fortunately, you're a Tableau analyst. You have a tool that allows you to explore your data FAST. You don't have to decide on a chart and then put your data in it. It's ok not to know which chart will be right at the beginning of the process - it's a discovery process.


I hope what you take away from this is that none of these charts is right or wrong, but that this is the process you sometimes have to go through to get the best view of your results. In this case, the small multiples worked because, aside from the '18-34 year old males' group, the groups were so closely distributed that it was otherwise difficult to see trends, differences, and patterns.


Charts
22 Jun 21:42

Steven Sinofsky, My Tablet Has Stickers

by Stowe Boyd

excerpt [emphasis mine]

Continue reading on Medium »

22 Jun 21:42

The long ride

by Josh Bernoff

I like cycling long distances. Right now I’m planning an 84-mile ride from Arlington, the suburb of Boston where I live, to Falmouth, Cape Cod on July 3. This ride, my bicycle, my body, my mind, and my health are all intimately connected. And when you get to the end of this post, you’ll see how they connect with you, … Continue reading The long ride →

The post The long ride appeared first on without bullshit.

22 Jun 21:42

Trajectory Book 2 Launch Day

by Rob Campbell

Reports are coming in that Trajectory Book 2 is landing on Kindles all over the planet…

Well, on Kindles owned by people named Justin at the very least (hi @justinmm2!). If you preordered, you should be getting a copy too. Book 1 is still on sale, so if you don’t have that, now’s a good time.

If you’re looking for the ePub version, I have some good news and some bad news. Good news first, there will be an ePub version! Bad news is, you’ll have to wait a bit longer. More on that in a later post.

Right now, I’m just happy this is finally out the door and on people’s readers. Let me know what you think. Drop me a review on Amazon and Goodreads. There’s even a Q&A thing on my Goodreads author page if you have burning questions you need answered. If you have actual questions about burning sensations, you should skip Goodreads and go straight to your doctor. There’s a cream for that.

I really hope you like it.

22 Jun 21:42

Improvements to Tabs and Video on Firefox Make Browsing Faster and Easier

by Nick Nguyen

In the recent past we have made several improvements to Firefox Accounts that make your browsing better. Those improvements include personalizing your account and the ability to sync passwords, bookmarks and other browsing data between your desktop and mobile devices.

Now, we are making it even easier to access synced tabs directly in your desktop Firefox browser. If you’re logged into your Firefox Account, you will see all open tabs from your smartphone or other computers within the sidebar. In the sidebar you can also search for specific tabs quickly and easily.

Do you have a powerful multi-processor computer? If so, Firefox now delivers an improved video experience when viewing videos on YouTube. This means videos will run smoother. You will also use less bandwidth and get improved battery life on your laptop.

And if you are an Android user, we have removed the icons that show up in the URL bar. This prevents unsecured sites from copying the images of legitimate sites to try to trick you into thinking the site is safe for sensitive information.

Lastly, we added to Firefox for Android a feature that lets you show or hide web fonts. This may reduce the amount of data required for browsing, an important factor for data-conscious users. We hope you enjoy these new features. Feel free to send us any feedback.

22 Jun 21:41

The Last Mile of Public Service Delivery

by mheadd
The most important thing about civic technology has nothing to do with technology at all. The term “civic technology” gets used a lot, and it may mean different things to different people. I think this highlights the fact that the work being done in this area is dynamic, growing, and evolving rapidly — all good […]
22 Jun 21:41

Canada’s spring of occupations

by Michal Rozworski

Welcome back to the first podcast episode after a two-month hiatus! This week, three guests talk about two significant occupations of public space that have happened in Canada in the interim: the Black Lives Matter occupation of police headquarters plaza in Toronto and the occupations of Indigenous and Northern Affairs offices across the country.

In this first half, I speak with journalist Desmond Cole about the Black Lives Matter occupation of the police headquarters plaza in Toronto. Activists took over the plaza for two weeks in March and April over continuing police brutality and lack of accountability. Desmond reported regularly from the camp and spent several nights there. Aside from his column in the Toronto Star and his other print and radio work, he is also currently writing a book about black history and black politics in Canada.

The second half features my conversation with two activists and organizers behind Occupy INAC in Regina, Robyn Pitawanakwat and Susana Deranger. Susana is a veteran of the long struggle for justice for Canada's First Nations and an activist of over 40 years in Saskatchewan. Robyn is from a younger generation, though as the daughter of a long-time Indigenous activist, she too has deep roots in the same fight. The Colonialism No More camp has been up for 50 days in front of the Indigenous and Northern Affairs office in Regina. It started as part of a wave of occupations of INAC offices across the country in response to the state of emergency in Attawapiskat over youth suicide.

22 Jun 21:17

Plantronics Voyager 5200 Series

by Volker Weber


Plantronics upgrades its Voyager headset with better noise cancellation. The new 5200 series adds a fourth microphone, a new algorithm to detect and eliminate wind noise, and a modified microphone boom that shields the microphones from wind. When I had to rely on a headset to work with dictation software, it turned out that the Voyager Legend was superior to the more convenient Voyager Edge. Sound quality is all-important, and the 5200 aims to improve outdoor use.

Voyager 5200 is available now; the 5200 UC variant becomes available soon, and that is the model I am waiting for. The new 5200 UC replaces the BT300 dongle that came with previous UC models with the new BT600, the same dongle that comes with the Voyager Focus UC headset that I love so much. It also includes this charging case, which is slightly larger than the Legend case:


The case recharges the headset when you put it away and it also holds the BT600 dongle when you remove it from your computer. While you can pair a Voyager headset directly with your PC, using the Plantronics dongle comes with a number of benefits, the most important one being that you don't have to reconfigure your software when the headset is off or not paired. BT600 is recognized as a sound device, not a Bluetooth device.


The new charging case also doubles as a desktop stand. You can just place the headset directly on the case, as it has a second charge connector that is accessible from the top.

The headset has a few minor modifications. The volume buttons moved to the top, where the Voyager Pro had its volume control. Two distinct buttons are always easy to find with your fingertips. The Legend has a slider at the back, which also worked well for me.


More importantly, the 5200 series adds a MicroUSB port, which was missing from the Legend. Even without the charging case, you no longer need a little adapter between the cable and the charging connector. The headset now has both options: the quick magnetic connector for the case/stand and a MicroUSB port.

22 Jun 21:13

Advancing Drupal's web services

by Dries

In an earlier blog post, I looked at the web services solutions available in Drupal 8 and compared their strengths and weaknesses. That blog post was intended to help developers choose between different solutions when building Drupal 8 sites. In this blog post, I want to talk about how to advance Drupal's web services beyond Drupal 8.1 for the benefit of Drupal core contributors, module creators and technical decision-makers.

I believe it is really important to continue advancing Drupal's web services support. There are powerful market trends that oblige us to keep focused on this: integration with diverse systems having their own APIs, the proliferation of new devices, the expanding Internet of Things (IoT), and the widening adoption of JavaScript frameworks. All of these depend to some degree on robust web services.

Moreover, newer headless content-as-a-service solutions (e.g. Contentful, Prismic.io, Backand and CloudCMS) have entered the market and represent a widening interest in content repositories enabling more flexible content delivery. They provide content modeling tools, easy-to-use tools to construct REST APIs, and SDKs for different programming languages and client-side frameworks.

In my view, we need to do the following, which I summarize in each of the following sections: (1) facilitate a single robust REST module in core; (2) add functionality to help web services modules more easily query and manipulate Drupal's entity graph; (3) incorporate GraphQL and JSON API out of the box; and (4) add SDKs enabling easy integration with Drupal. Though I shared some of this in my DrupalCon New Orleans keynote, I wanted to provide more details in this blog post. I'm hoping to discuss this and revise it based on feedback from you.

One great REST module in core

While core REST can be enabled with only a few configuration changes, the full extent of possibilities in Drupal is only unlocked either when leveraging modules which add to or work alongside core REST's functionality, such as Services or RELAXed, or when augmenting core REST's capabilities with additional resources to interact with (by providing corresponding plugins) or using other custom code.
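To make this concrete, here is a minimal TypeScript sketch of what talking to core REST looks like from a client once the module is enabled; the base URL and node ID are placeholders, and it assumes the node resource is exposed for GET requests with the default JSON serialization via the _format query parameter.

// Minimal sketch (illustrative, not from this post): fetch a node from
// Drupal 8 core REST as JSON. The base URL and node ID are placeholders,
// and this assumes the node resource is enabled for GET in json format.
const BASE_URL = 'https://example.com';

async function fetchNode(nid: number): Promise<unknown> {
  const response = await fetch(`${BASE_URL}/node/${nid}?_format=json`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}

fetchNode(2).then((node) => console.log(node));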

Having such disparate REST modules complicates the experience. These REST modules have overlapping or conflicting feature sets, which are shown in the following table.

| Feature | Core REST | RELAXed | Services | Ideal core REST |
| Content entity CRUD | Yes | Yes | Yes | Yes |
| Configuration entity CRUD | Create resource plugin (issue) | Create resource plugin | Yes | Yes |
| Custom resources | Create resource plugin | Create resource plugin | Create Services plugin | Possible without code |
| Custom routes | Create resource plugin or Views REST export (GET) | Create resource plugin | Configurable route prefixes | Possible without code |
| Translations | Not yet (issue) | Yes | Create Services plugin | Yes |
| Revisions | Create resource plugin | Yes | Create Services plugin | Yes |
| File attachments | Create resource plugin | Yes | Create Services plugin | Yes |
| Authenticated user resources (log in/out, password reset) | Not yet (issue) | No | User login and logout | Yes |

I would like to see a convergence where all of these can be achieved in Drupal core with minimal configuration and minimal code.

Working with Drupal's entity graph

Recently, a discussion at DrupalCon New Orleans with key contributors to the core REST modules, maintainers of important contributed web services modules, and external observers led to a proposed path forward for all of Drupal's web services.

A visual example of an entity graph in Drupal.

Buried inside Drupal is an "entity graph" over which different API approaches like traditional REST, JSON API, and GraphQL can be layered. These varied approaches all traverse and manipulate Drupal's entity graph, with differences solely in the syntax and features made possible by that syntax. Unlike core's REST API which only returns a single level (single entity or lists of entities), GraphQL and JSON API can return multiple levels of nested entities as the result of a single query. To better understand what this means, have a look at the GraphQL demo video I shared in my DrupalCon Barcelona keynote.

What we concluded at DrupalCon New Orleans is that Drupal's GraphQL and JSON API implementations require a substantial amount of custom code to traverse and manipulate Drupal's entity graph, that there was a lot of duplication in that code, and that there is an opportunity to provide more flexibility and simplicity. Therefore, it was agreed that we should first focus on building an "entity graph iterator" that can be reused by JSON API, GraphQL, and other modules.

This entity graph iterator would also enable manipulation of the graph, e.g. for aliasing fields in the graph or simplifying the structure. For example, the difference between Drupal's "base fields" and "configured fields" is irrelevant to an application developer using Drupal's web services API, but Drupal's responses leak this internal distinction by prefixing configured fields with field_ (see the left column in the table below). By the same token, all fields, even if they carry single values, expose the verbosity of Drupal's typed data system by being presented as arrays (see the left column in the table below). While there are both advantages and disadvantages to exposing single-value fields as arrays, many developers prefer more control over the output or the ability to opt into simpler outputs.

A good Drupal entity graph iterator would simplify the development of Drupal web service APIs, provide more flexibility over naming and structure, and eliminate duplicate code.

Current core REST (shortened response):

{
  "nid": [
    {
      "value": "2"
    }
  ],
  "title": [
    {
      "value": "Lorem ipsum"
    }
  ],
  "field_product_number": [
    {
      "value": "35"
    }
  ],
  "field_image": [
    {
      "target_id": "2",
      "alt": "Image",
      "title": "Hover text",
      "width": "210",
      "height": "281",
      "url": "http://site.com/x.jpg"
    }
  ]
}

Ideal core REST (shortened response):

{
  "nid": "2",
  "title": "Lorem ipsum",
  "product_number": {
    "value": 35
  },
  "image": {
    "target_id": 2,
    "alt": "Image",
    "title": "Hover text",
    "width": 210,
    "height": 281,
    "url": "http://site.com/x.jpg"
  }
}
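To make the difference concrete on the client side, the following TypeScript sketch performs the kind of simplification shown above: it strips the field_ prefix and unwraps single-value field items. This is purely illustrative normalization code written for this comparison, not the proposed core API, which would live inside Drupal itself.

// Illustrative only: normalize a verbose core REST entity response into a
// simpler shape like the one above. Strips the "field_" prefix and collapses
// single-value { value: x } items; richer items (e.g. images) stay as objects.
type DrupalFieldItem = Record<string, unknown>;
type DrupalEntity = Record<string, DrupalFieldItem[]>;

function simplifyEntity(entity: DrupalEntity): Record<string, unknown> {
  const simplified: Record<string, unknown> = {};
  for (const [name, items] of Object.entries(entity)) {
    const key = name.startsWith('field_') ? name.slice('field_'.length) : name;
    const values = items.map((item) =>
      Object.keys(item).length === 1 && 'value' in item ? item['value'] : item
    );
    simplified[key] = values.length === 1 ? values[0] : values;
  }
  return simplified;
}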

GraphQL and JSON API in core

We should acknowledge simultaneously that the wider JavaScript community is beginning to embrace different approaches, like JSON API and GraphQL, which both enable complex relational queries that require fewer requests between Drupal and the client (thanks to the ability to follow relationships, as mentioned in the section concerning the entity graph).

While both JSON API and GraphQL are preferred over traditional REST due to their ability to provide nested entity relationships, GraphQL goes a step further than JSON API by facilitating explicitly client-driven queries, in which the client dictates its data requirements.
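As a rough illustration of what "client-driven" means in practice, the TypeScript sketch below sends one nested GraphQL query and receives exactly the fields it asked for in a single round trip; the /graphql endpoint path and the schema field names are assumptions made for this example, not a fixed Drupal schema.

// Sketch of a client-driven GraphQL request: the client asks for exactly the
// fields it needs, including nested relationships, in one round trip.
// Endpoint path and schema field names are assumptions for illustration.
const query = `
  {
    article(id: 2) {
      title
      author {
        name
      }
      image {
        url
        alt
      }
    }
  }
`;

async function runQuery(): Promise<unknown> {
  const response = await fetch('https://example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });
  return response.json();
}

runQuery().then((result) => console.log(result));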

I believe that GraphQL and JSON API in core would be a big win for those building decoupled applications with Drupal, and these modules can use existing foundations in Drupal 8 such as the Serialization module. Furthermore, Drupal's own built-in JavaScript-driven UIs could benefit tremendously from GraphQL and JSON API. I'd love to see them in core rather than as contributed modules, as we could leverage them when building decoupled applications backed by Drupal or exchanging data with other server-side implementations. We could also "eat our own dog food" by using them to power JavaScript-driven UIs for block placement, media management, and other administrative interfaces. I can even see a future where Views and GraphQL are closely integrated.

A comparison of different API approaches for Drupal 8, with amended and simplified payloads for illustrative purposes.

SDKs to consume web services

While a unified REST API and support for GraphQL and JSON API would dramatically improve Drupal as a web services back end, we need to be attentive to the needs of consumers of those web services as well by providing SDKs and helper libraries for developers new to Drupal.

An SDK could make it easy to retrieve an article node, modify a field, and send it back without having to learn the details of Drupal's particular REST API implementation or the structure of Drupal's underlying data storage. For example, this would allow front-end developers to not have to deal with the details of single- versus multi-value fields, optional vs required fields, validation errors, and so on. As an additional example, incorporating user account creation and password change requests into decoupled applications would empower front-end developers building these forms on a decoupled front end such that they would not need to know anything about how Drupal performs user authentication.

As starting points for JavaScript applications, native mobile applications, and even other back-end applications, these SDKs could handle authenticating against the API and juggling of the correct routes to resources without the front-end developer needing an understanding of those nuances.

In fact, at Acquia we're now in the early stages of building the first of several SDKs for consuming and manipulating data via Drupal 8's REST API. Waterwheel (previously Hydrant), a new generic helper library intended for JavaScript developers building applications backed by Drupal, is the work of Acquia's Matt Grill and Preston So, and it is already seeing community contributions. We're eager to share our work more widely and welcome new contributors.

Conclusion

I believe that it is important to have first-class web services in Drupal out of the box in order to enable top-notch APIs and continue our evolution to become API-first.

In parallel with our ongoing work on shoring up our REST module in core, we should provide the underpinnings for even richer web services solutions in the future. With reusable helper functionality that operates on Drupal's entity graph available in core, we open the door to GraphQL, JSON API, and even our current core REST implementation eventually relying on the same robust foundation. Both GraphQL and JSON API could also be promising modules in core. Last but not least, SDKs like Waterwheel that empower developers to work with Drupal without learning its complexities will further advance our web services.

Collectively, these tracks of work will make Drupal uniquely compelling for application developers within our own community and well beyond.

Special thanks to Preston So for contributions to this blog post and to Moshe Weitzman, Kyle Browning, Kris Vanderwater, Wim Leers, Sebastian Siemssen, Tim Millwood, Ted Bowman, and Mateu Aguiló Bosch for their feedback during its writing.

22 Jun 21:13

Gotthard tunnel website using Drupal

by Dries

The Gotthard Base Tunnel, under construction for the last 17 years, was officially opened last week. This is the world's longest and deepest railroad tunnel, spanning 57 kilometers from Erstfeld to Bodio, Switzerland, underneath the Swiss Alps. To mark its opening, Switzerland also launched a multilingual multimedia website celebrating the project's completion. I was excited to see they chose to build their site on Drupal 8! The site is a fitting digital tribute to an incredible project and launch event. Congratulations to the Gotthard Base Tunnel team!


22 Jun 21:12

Google’s AlphaGo donation will help Code Club grow

by Clare Sutcliffe

In March, Google DeepMind’s Artificial Intelligence AlphaGo won an historic five-game challenge match against Lee Sedol, the best Go player of the past decade. DeepMind said that they would be donating the grand prize of $1 million US to charities and non-profits. This afternoon, co-founder and CEO Demis Hassabis announced that Code Club will receive $100,000 of AlphaGo’s prize!

Demis Hassabis on Twitter

Pleased to confirm the recipients of the #AlphaGo $1m prize! @UNICEF_uk, @CodeClub, and the American, European and Korean Go associations

We’re absolutely delighted that Google DeepMind has chosen to make this generous donation to Code Club, and we’ll use the prize money to help even more children in the UK and worldwide to get into computing and coding.

Happy tech enthusiasts at a Code Club

Read more on Code Club’s blog, and if you’d like to support our work too, visit our websites to find out how you can volunteer in the UK and around the world.

The post Google’s AlphaGo donation will help Code Club grow appeared first on Raspberry Pi.

22 Jun 21:05

Former NYT Chief Explains Japan’s Lack of Journalistic Freedom

by Louis Krauss

Martin Fackler, former Tokyo bureau chief of the New York Times, gave a lecture high above Tokyo at Academy Hills in Roppongi last Thursday, primarily about why Japan's quality and freedom of the press have regressed from a world ranking of 11 to 72 since 2010 (Press Freedom Index). Filling the auditorium were around 100 mostly Japanese journalists, students and teachers interested in an outsider's insight into why Japanese news sources are lacking. In Fackler's opinion, the mainstream media and their reporters have been molded by Abe's government into thinking they are getting special access to all of the country's biggest topics, when really that access is diluting the quality of their stories. Whereas major outlets like the New York Times and the BBC are unafraid to feature public opinions that may oppose the government, Fackler says Japan's media culture has taken the importance of so-called "access journalism" too far.

“It seems like it’s all about getting the scoop in mainstream Japan media,” Fackler said. “It’s not just the atmosphere, it’s how people’s careers are made, by getting the scoop to the officials’ breaking access news.”

While also praising Abe’s cabinet for their savvy when it comes to promptly addressing the media, he points out that the desire for a scoop has made newspapers such as Asahi and Mainichi regress into a routine of rarely questioning or opposing officials in their writing, since doing so would cost them their special access rights.   

“Abe gives out a lot of scoops and one-on-one interviews with cooperative media members, and even has dinner with them,” Fackler said. “So if you play ball, you get a lot of access. That’s why there are a lot of ‘scoops’ in the Yomiuri Shimbun and Sankei Shimbun.”

Nowadays in Japan’s newsrooms, getting ‘scoops’ is overvalued at the expense of higher-quality journalism, i.e. going out into the field and approaching officials individually so as not to get the pre-canned speech that everyone accepts as true.

Fackler also pointed to press clubs as a culprit for dumbing down Japanese reporting, something that rang true with me. When I started reporting for JSRC in the summer of 2015, I noticed that a lot of the foreigners in the FCCJ rarely left the workroom to write their stories. For me, this was radically different from my experience in my hometown of Baltimore, where a protest by locals in the rough parts of town was often a bigger story than the original government plan they were protesting.

“All the reporters sit there and wait for the officials to bring them news,” Fackler said. “These clubs were originally intended as a way to keep a close eye on the government, but now what they’ve become is a machine to create a very passive type of journalism. It’s not just the facts that the officials parlay, it’s also the stories, narratives and how to understand it all.”

Fackler’s opening story and example of this new “passive journalism” was his experience initially reporting on the Fukushima disaster in March 2011. Minamisoma City Mayor Katsunobu Sakurai sent out an S.O.S. for help and almost no local reporters came in person to talk to him, Fackler being one of the few who didn’t opt to run for the hills on March 13th, one day after the disaster.

“I’ll never forget the reaction, because everyone at the front desk excitedly murmured: ‘kisha ga kimashita!’ (a reporter came!). Sakurai said the reason there was so much commotion was that all the Japanese reporters had left. So while all the mainstream papers were saying ‘The government has everything under control, don’t exaggerate the risks, etc.’, those reporters were themselves running for the hills,” Fackler said. “It became a lesson to me of how the media here has a tendency to repeat the official mindset, even if they believe differently, and that there’s a pressure to not deviate from the official narrative.”  

In my opinion, much of what the former correspondent listed as the deep issues with Japan’s newspapers is easily fixable; it just takes some sacrifice in the form of losing favor within the circle of media the government likes. I appreciated how Mr. Fackler took notice of NHK broadcasts constantly being focused on an official’s or prosecutor’s good work, never on a simple salaryman or artist. Newspapers are just as much to blame for Japan’s fall in press freedom as Abe and his regime, if not more, because of the power I believe news articles can have in rallying people to pressure the government into doing what’s best. Instead, we see newspaper writers falling into line as if they were mere kohai (juniors) to the government that tells them what’s important.

“I think the Abe govt. has raised the bar with dealing with press, but the problem is the big media haven’t followed. They’re still in this post-war heiwagyou case thing, where they’re so used to having stories fed to them, that when the government roughs them up a bit, they’re like dogs and roll over on their backs. I don’t think they know or have had a good fight with the government in a really long time.”

22 Jun 03:29

The Wisdom of James Mickens

by Rui Carmo

The guy’s a genius. A while back he was the only reason I even read USENIX, and I can’t think of a better way to start the week than go over “This World Of Ours” again.