Shared posts

26 Aug 00:19

Jorge Ramos is the most trusted name in Latino news. Donald Trump bounced him from a press conference.

by Dara Lind

Donald Trump doesn't like people who criticize him, as a general rule. Donald Trump also does not like Univision — he's suing the Spanish-language news network for $500 million after it dropped coverage of his Miss Universe pageant.

So when Univision journalist Jorge Ramos — the most trusted name in Latino news — asked a question at a Trump press conference without getting called on, Trump had his security detail bounce Ramos from the press conference, shouting "Go back to Univision!"

WATCH: Trump kicks Univision's Jorge Ramos out of press conference. https://t.co/Ej9r3xIBsz

— ¡Gabe! Ortíz (@TUSK81) August 25, 2015

Trump eventually relented and let Ramos back into the press conference. The two men then had an extended back-and-forth over the issues of immigration and the Latino vote, which Ramos alleged Trump might lose for the GOP — and to which Trump replied with this:

Trump: "Hispanics love me; they love me. You know what they want? They want jobs."

— Roger Simon (@politicoroger) August 25, 2015

[Chart from a Gallup poll from earlier this week showing Trump's deeply negative favorability among Hispanic Americans]

Jorge Ramos, by contrast, made Time's 2015 list of the 100 most influential people, and is generally considered to be the most authoritative newscaster on Spanish-language television.

Ramos isn't an impartial journalist. He sees his role as championing Latino voters and immigrant rights, and he challenges members of both parties on their immigration records and rhetoric. So it's not surprising that he confronted Trump. But Trump's first response, rather than answering the questions or simply ignoring him, was to kick Ramos out.

24 Aug 17:44

The gravity of China’s great fall

lbstopher

Great fall of China--Nice!

ASIAN markets are once again driving traders batty. A mammoth 8.5% plunge in China's stockmarkets on Monday, August 24th, touched off a wild day on global markets, in which Japanese and European stocks plummeted (as did American shares, before staging a remarkable turnaround), prompting commentators to liken the situation to previous crises, from the Wall Street crash of 1929 to the Asian financial crisis of 1997. A day later, the Shanghai Composite Index continued its slide, falling a further 7.6% by the close on August 25th.

Asian share prices have had a brutal summer. China deserves much of the blame. Its own market has crashed (falling by over 40% from its peak, and losing all the ground gained in 2015) amid worries about the pace of China's economic slowdown. Slackening Chinese demand for goods and commodities would represent a...Continue reading

24 Aug 20:11

Photoshop Of The Day

by Joe Jervis

Mediaite reports: “Donald Trump held a sizable, though not stadium-filling, rally in Mobile, Alabama on Friday. Nothing too exciting happened, at least until the Internet got its grubby mitts on a photo taken by Mark Wallheiser for Getty Images. The result was a barrage of Photoshop masterpieces and fails that, taken together, made for a rather entertaining farce. First on the docket is perhaps the best entry [above], courtesy of graphic artist Tom Adelsbach. Like most Photoshop edits, Adelsbach’s image is both hilariously amazing and gut-wrenchingly horrifying.” See more takes on the image at the link or create one of your own. (I know some of you have mad Photoshop skills.)


19 Aug 20:00

Straight Teen Discovers Best Friend Is Gay In New Short From Coca-Cola

by Dan Tracer

As part of Coca-Cola’s True Friendship campaign, the beverage giant teamed up with Oscar-winning screenwriter Dustin Lance Black (Milk, J. Edgar) to produce “El SMS (The Text),” a short film about two teen best friends.

From the film’s description:

The campaign, by Pereira & O’Dell, focuses on various different scenarios which question what makes a true friend (#VerdaderoAmigo). In this eight-minute film, a group of teen boys joke and tease each other in the usual high school way about girls, video games and such-like. Two of the boys are especially close buddies, but Rafael is hiding something from his friend Diego: he’s gay. When he leaves his phone unattended, Diego reads a text that surprises him. His decision to cover for his friend, and accept him for who he is, is what cements this true friendship. And, as Rafael points out, gay guys know all the pretty girls.

Black said in a statement: “‘The Text’ was perhaps the most personal for me to direct. As an artist, I feel I have a responsibility to share the stories of who LGBT people truly are in order to dispel any atmosphere of fear that might prevent LGBT people from sharing their lives openly.” He added, “For Coca-Cola to take a pro-diversity, pro-equality stance creates a lot of goodwill in the LGBT community.”

Watch below, and hit ‘CC’ to access English subtitles:

h/t GailyGrind

24 Aug 12:31

How Google convinced China's Communist Party to love science fiction

by Ezra Klein

This is a fascinating story from author Neil Gaiman:

I was in China in 2007, and it was the first ever state-sponsored, Party-approved science-fiction convention. They brought in some people from the west and I was one of them, and I was talking to a number of the older science-fiction writers in China, who told me about how science fiction was not just looked down on, but seen as suspicious and counter-revolutionary, because you could write a story set in a giant ant colony in the future, when people were becoming ants, but nobody was quite sure: was this really a commentary on the state? As such, it was very, very dodgy.

I took aside one of the Party organisers, and said, "OK. Why are you now in 2007 endorsing a science-fiction convention?" And his reply was that the Party had been concerned that while China historically has been a culture of magical and radical invention, right now, they weren’t inventing things. They were making things incredibly well but they weren’t inventing. And they’d gone to America and interviewed the people at Google and Apple and Microsoft, and talked to the inventors, and discovered that in each case, when young, they’d read science fiction. That was why the Chinese had decided that they were going to officially now approve of science fiction and fantasy.

The anecdote comes from a wonderful conversation Gaiman had with Kazuo Ishiguro about genre fiction. It's very much worth reading in full.

24 Aug 12:06

PHILADELPHIA: Cop Caught On Video Extorting Driver, Mocking His “Faggot-Ass” Windshield Wipers

by Joe Jervis

Philly.com reports:

“Either you buy these or I take your car, ’cause it’s unregistered.” Officer Matthew Zagursky didn’t mince words Thursday when he flashed tickets to the Hero Thrill Show, a fundraiser that supports the families of fallen officers and firefighters, to two men after pulling them over. And yesterday, hours after a recording of that exchange went viral, Commissioner Charles Ramsey didn’t either. “No part of that video is good,” Ramsey said of the footage, which also shows Zagursky spouting homophobic slurs. “It’s just bad all the way around.” Ramsey said Zagursky, 32, has been pulled from North Philly’s 24th District and placed on administrative duty as the Internal Affairs Bureau investigates the video, posted to Facebook by a user named “Rob Stay Faded.” In the video, the driver gives Zagursky $30 for two tickets to the Hero Thrill Show, an annual fundraiser that pays the college tuition of the children of police officers and firefighters killed in the line of duty. After the money changes hands, the officer jokes with the driver, asking him about his “faggot ass wipers,” referring to the pink-tinted windshield wipers on his car. When the driver tells Zagursky they’re in solidarity with his grandmother, a breast-cancer survivor, the officer tells him he looks like “a fruitcake.”

Philadelphia police claim there is no “internal pressure” for officers to sell tickets to the benefit.


21 Aug 11:00

VIRAL VIDEO: Real Life First-Person Shooter ChatRoulette

by Joe Jervis

Eurogamer reports:

If you log on to Chatroulette, you may be in for a big surprise. No, not that sort of surprise. Instead, how about an invitation to a live-action first-person shooter where you are in control? Escape the Room company Red House Mysteries, along with help from neighbours, local residents and cosplay artists, managed to pull off such a feat, then invited unsuspecting members of the public to try and navigate their creation. The video below shows how impressive it looks when everything comes together, while the making of is also worth a watch to see how much work went into syncing all the different parts (audio, visual effects, actors, props) for the live experience. It also goes to prove that there are normal people out there on Chatroulette. We’ve just never found any.

1.3M views overnight.


21 Aug 11:19

Patrick Stewart Does Reddit AMA

by Joe Jervis

Yesterday Patrick Stewart took part in one of Reddit’s “Ask Me Anything” sessions. Via E Online:

Patrick Stewart is kind of like the Hollywood version of a unicorn. He’s a very rare combination of brilliant, funny, down to earth and most importantly, royal. Whenever you have the pleasure to be in the presence of Sir Patrick, you should count yourself as lucky. Which is why we knew that his Reddit AMA was going to be good. After all, his Twitter account is something of a masterpiece, and conducting an Ask Me Anything is basically one giant Twitter post. The actor signed up for his first Reddit foray to promote the premiere of his new comedy Blunt Talk, which sees him teaming up with fellow comedian Seth MacFarlane. On whether Star Trek actor Jonathan Frakes is better with a beard: “With a beard, preferable, because it tickles when you kiss him.” On his worst habit: “Slurping when I drink…anything.” On hobbies he’d like to pick up: “Yes! Deep-sea diving and mountaineering. There’s something about going up and down that turns me on.”

The banter between Reddit users is also entertaining.


20 Aug 09:56

Jeb Bush and the New York Times: Can You Call Someone Who Thinks He Can Get 4.0 Percent GDP Growth "Wonky?"

by dean.baker1@verizon.net (Dean Baker)

That's what millions of people are asking after reading a NYT article contrasting the "bombastic" Donald Trump to Jeb Bush who is described as "the wonky son of a president." Bush has repeatedly said that he can generate 4.0 percent GDP growth during a Bush presidency.

The baseline projection for the years 2017 through 2025 from the Congressional Budget Office is 2.1 percent. Raising this to 3.0 percent would be a remarkable accomplishment. There is no remotely plausible story that would raise growth to 4.0 percent. It would be sort of like predicting that a baseball team would go undefeated through a 162-game season. It would be difficult to take seriously a team manager who confidently made such predictions. The same should apply to a presidential candidate boasting of 4.0 percent GDP growth.

19 Aug 05:01

Proof cats outcompete dogs? [Life Lines]

by Dr. Dolittle


By Noel Feans (originally posted to Flickr as Watch your back!) [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

New research suggests that cats may have played a role in the extinction of about 40 species of wild dogs by simply out-hunting them and therefore consuming more food.

The study noted that dogs first appeared in North America around 40 million years ago and by 22 million years ago there were over 30 species of wild dogs. Cats arrived from Asia around 20 million years after dogs appeared. The arrival of cats was followed by decreased diversity of dogs. Dr. Daniele Silvestro was quoted in a press release from the University of Gothenburg saying, “We usually expect climate changes to play an overwhelming role in the evolution of biodiversity. Instead, competition among different carnivore species proved to be even more important for canids.” According to the study, only 9 species of wild dogs are found in North America today.

Obtaining food is a major contributor to the evolutionary success of carnivores. According to a quote from Dr. Silvestro in the Oregonian, “The cats have retractable claws which they only pull out when they catch their prey. This means they don’t wear them out and they can keep them sharp. But the dogs can’t do this, so they are at a disadvantage to the cats in an ambush situation.”

Sources:

University of Gothenburg press release

Silvestro D, Antonelli A, Salamin N, Quental TB: The role of clade competition in the diversification of North American canids. Proceedings of the National Academy of Sciences USA (2015). The full article is available free of charge at the following link: http://www.pnas.org/content/early/2015/06/23/1502803112.abstract

The Oregonian

19 Aug 17:09

Textbooks are a bargain

by noreply@blogger.com (Greg Mankiw)
Compared with Harvard tuition, that is, according to Irwin Collier:
Excerpts from the Harvard Catalogue for 1874-75 with principal texts.... Incidentally, one finds that annual fees for a full course load at Harvard ran $120/year and a copy of John Stuart Mill’s Principles cost $2.50. Cf. today’s Amazon.com price for N. Gregory Mankiw’s Economics which is $284.16. If tuition relative to the price of textbooks had remained unchanged (and the quality change of the Mankiw textbook relative to Mill’s textbook(!) were equal to the quality change of the Harvard undergraduate education today compared to that of 1874-75(!!)), Harvard tuition would only be about $13,600/year today instead of $45,278.

In other words, over the past 140 years, textbook prices have risen only 114-fold, whereas Harvard tuition has risen 377-fold. 

Over this period, the CPI has risen 22-fold. So the real price of textbooks has increased about 5-fold, or a bit more than 1 percent per year.
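A quick back-of-the-envelope check of these ratios, using only the figures quoted above (a minimal R sketch):

# Figures as quoted in the post
tuition_1875  <- 120       # Harvard tuition, 1874-75, dollars per year
tuition_today <- 45278     # Harvard tuition today
text_1875     <- 2.50      # Mill's Principles, 1874-75
text_today    <- 284.16    # Mankiw's Economics on Amazon today
cpi_factor    <- 22        # CPI increase over roughly 140 years

text_today / text_1875                    # ~114-fold rise in the textbook price
tuition_today / tuition_1875              # ~377-fold rise in tuition

real_rise <- (text_today / text_1875) / cpi_factor   # ~5-fold real increase
real_rise^(1 / 140) - 1                              # ~1.2% per year, "a bit more than 1 percent"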

19 Aug 22:06

GAWKER: Josh Duggar Had Two Ashley Madison Accounts

by Joe Jervis

Holy Family Duggar Research Duggar Council Duggar! Gawker has found TWO paid Ashley Madison accounts for Josh Duggar among the millions of user profiles released today by hackers. They write:

Josh himself took to his family’s Facebook page to absolve himself of his past indiscretions and assure the world he was back on a righteous path. But data released online in the wake of the hack on Ashley Madison’s servers certainly seems to show otherwise. Someone using a credit card belonging to a Joshua J. Duggar, with a billing address that matches the home in Fayetteville, Arkansas owned by his grandmother Mary—a home that was consistently shown on their now-cancelled TV show, and in which Anna Duggar gave birth to her first child—paid a total of $986.76 for two different monthly Ashley Madison subscriptions from February of 2013 until May of 2015.

Here are some of Josh Duggar’s turn-ons in the name of Jesus. And this is what he’s looking for in an adulterous lover in the name of Jesus.

Probably like many of you, in general I’m not crazy about the Ashley Madison hacking. But unlike Gawker’s recent outing of a previously unknown executive, the festering stink of hypocrisy makes THIS story worthwhile.


19 Aug 12:00

I’m Latino. I’m Hispanic. And they’re different, so I drew a comic to explain.

by Terry Blas
[Comic by Terry Blas, in eight panels, explaining the difference between “Latino” and “Hispanic”]


Terry Blas is a writer/cartoonist and creator of the web series Briar Hollow. He is a member of Portland, Oregon's Periscope Studio, a powerhouse collective of more than two dozen award-winning creatives. Follow him on Twitter @terryblas and on Tumblr at terryblas.tumblr.com.


First Person is Vox's home for compelling, provocative narrative essays. Do you have a story to share? Read our submission guidelines, and pitch us at firstperson@vox.com.


14 Aug 15:25

Seriously? Payment for citations?

by drugmonkey
lbstopher

Product placement in Science?

A Reader submitted this gem of a spam email:

We are giving away $100 or more in rewards for citing us in your publication! Earn $100 or more based on the journal’s impact factor (IF). This voucher can be redeemed your next order at [Company] and can be used in conjunction with our ongoing promotions!

How do we determine your reward?
If you published a paper in Science (IF = 30) and cite [Company], you will be entitled to a voucher with a face value of $3,000 upon notification of the publication (PMID).

This is a new one on me.

14 Aug 17:00

Also Forgot His Nuts

by BD
lbstopher

Vaudeville humor still shines.

Grocery Store | Los Angeles, CA, USA

(I go to the store to get bananas, and nothing else. I pay for the bananas, and start to walk away, forgetting them at the register.)

Cashier: *holds bananas up and calls to me* “Hey! Your bananas!”

Me: “That’s between me and my psychiatrist, thank you very much!”

(We all have a good chuckle as I return for the bananas.)

13 Aug 13:37

This semi-autonomous stroller from VW is a great idea

by Noah Joseph
lbstopher

Video is amazing.

Filed under: Marketing/Advertising, Videos, Volkswagen, Europe, Technology, Autonomous

Volkswagen's office in the Netherlands responded to popular demand on social media with this video showing a supposedly semi-autonomous baby stroller.

Continue reading This semi-autonomous stroller from VW is a great idea

12 Aug 14:00

Brain imaging research is often wrong. This researcher wants to change that.

by Julia Belluz
lbstopher

They need a Center for Reproducible Neuroscience because most of it isn't. OUCH.

When neuroscientists stuck a dead salmon in an fMRI machine and watched its brain light up, they knew they had a problem. It wasn't that there was a dead fish in their expensive imaging machine; they'd put it there on purpose, after all. It was that the medical device seemed to be giving these researchers impossible results. Dead fish should not have active brains.

The lit-up brain of a dead salmon — a cautionary neuroscience tale. (University of California Santa Barbara research poster)

The researchers shared their findings in 2009 as a cautionary tale: If you don't run the proper statistical tests on your neuroscience data, you can come up with any number of implausible conclusions — even emotional reactions from a dead fish.

In the 1990s, neuroscientists started using the massive, round fMRI (or functional magnetic resonance imaging) machines to peer into their subjects' brains. But since then, the field has suffered from a rash of false positive results and studies that lack enough statistical power — the likelihood of finding a real result when it exists — to deliver insights about the brain.

When other scientists try to reproduce the results of original studies, they too often fail. Without better methods, it'll be difficult to develop new treatments for brain disorders and diseases like Alzheimer's and depression — let alone learn anything useful about our most mysterious organ.

To address the problem, the Laura and John Arnold Foundation just announced a $3.8 million grant to Stanford University to establish the Center for Reproducible Neuroscience. The aim of the center is to clean up the house of neuroscience and improve transparency and the reliability of research. On the occasion, we spoke to Russ Poldrack, director of the center, about what he thinks are neuroscience's biggest problems and how the center will tackle them.

Julia Belluz: The field of neuroscience seems to have a particular problem with irreproducible results — or studies that fail when researchers try to repeat them. What's going on?

Russ Poldrack: I think there are some parts of neuroscience, like neuroimaging, that have a number of features that make it easier for practices to happen that can drive irreproducible findings.

When we do brain imaging, we're collecting data from 200,000 little spots in the brain, which creates a lot of leeway for false positives, bias, and false negatives. If you don't do the proper corrections — to address the fact that you’re doing a statistical test in each of those places — it's very easy to find a highly significant result [that's not actually real].

A group of researchers a few years ago tried to illustrate this problem by putting a dead salmon in an MRI scanner. When they analyzed data without doing proper corrections, you could find activation in the dead salmon's brain. They were trying to say you should do the [necessary statistical tests] or you can find activation pretty much anywhere.
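The multiple-comparisons problem Poldrack describes here can be illustrated with a small simulation on pure noise; the numbers below (20,000 tests standing in for the ~200,000 brain locations, 20 subjects, a 0.05 threshold) are arbitrary choices for the illustration, not figures from the salmon study:

set.seed(1)
n_tests    <- 20000   # stand-in for the many brain locations tested
n_subjects <- 20      # a typical modern sample size, per the interview

# One t-test per "voxel" on pure noise: there is no real signal anywhere
p_values <- replicate(n_tests, t.test(rnorm(n_subjects))$p.value)

sum(p_values < 0.05)                                   # ~1,000 false "activations" uncorrected
sum(p.adjust(p_values, method = "bonferroni") < 0.05)  # essentially none after correction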

The field also suffers from generally underpowered studies, especially in neuroimaging. It can cost up to $1,000 for each person we scan. When I started doing [brain scan studies] about 20 years ago, most studies had about eight subjects. Now no one would publish that. We all realize it’s way too underpowered.

The number of subjects in the average study has been going up — now in the order of about 20 to 30. For some analyses, that's reasonably powered, and for others, it's way underpowered.

JB: Wasn't there also just a real seduction in having fMRI machines that allowed scientists to watch the brain at work?


The amazing fMRI machine — its results need to be interpreted with caution. (Levent Konuk/Shutterstock)

RP: The fMRI has only been around a little over 20 years, and it started to take off in the last 10 to 15 years as a technique lots of people are using. It is really seductive: You see someone’s brain doing something while they’re doing a task.

The bigger problem, however, is simply that most of the people in this field had gotten trained to do statistics on much smaller data sets or different types of data sets that we use [today].

For example, there was some prominent work a few years ago about social pain. The idea was that when people experience social rejection, the pattern in their brain looks like it does when they are experiencing physical pain. That got a lot of play — but in the last couple of years, we realized the brain patterns for social pain and physical pain are really distinct.

JB: What will the center do to address these problems?

RP: Our goal is to build an online data analysis platform that people can use that will help them do the right thing. In our case, doing the right thing means doing the data analyses properly.

In part, the reason it’s difficult to do analyses properly is that a lot of people don't have the computational tools. In some cases, these analyses require more computing power than most people have on their desktop. We want to take advantage of high-performance computing systems to allow them to do analyses that would be way too big on their local machine.

Our hope is also that these free, powerful, and innovative computing tools will be an incentive that can get people to share their data so that others can try to reproduce their results or use the data to ask different questions than the original investigators may have been interested in.

This is part of a bigger movement going on across science for openness and transparency and reproducibility. And I think there are people across a lot of different subfields of science who have come to realize that if we don't get this right, the public is not going to continue to fund research.

12 Aug 19:36

CHINA: Hundreds Injured After Massive Industrial Explosion Rocks Northern City Of Tianjin [VIDEO]

by Joe Jervis

The BBC reports:

A massive explosion has hit China’s northern port city of Tianjin, reportedly injuring many people. According to Chinese state media, the explosion occurred when a shipment of explosives blew up at about 23:30 (16:30 GMT). Pictures and video shared on social media showed flames lighting up the sky and damage to nearby buildings. Latest reports in state media suggest that hundreds of people have been taken to hospital. Xinhua news agency said a fire started by the explosion was “under control” but said two firefighters were missing. Shockwaves from the blast could apparently be felt several kilometres away from Tianjin.

Tianjin is China’s fourth-largest city with a metro population of about 15 million. The explosion’s mushroom cloud was reportedly visible for 50 miles.


12 Aug 13:00

I Smell Immaturity

by BD


11 Aug 07:01

The Price is Right winner and cancer survivor calculates the odds

by Nathan Yau

Elisa Long, a professor in Decisions, Operations, and Technology Management at the University of California, Los Angeles, was diagnosed with breast cancer. The Price is Right films a breast cancer awareness episode every August. Long wanted to get on that show. So she watched episodes during her 6-hour chemotherapy sessions to familiarize herself with games and rules, and most importantly, to maximize her odds of winning.

Long describes her thought process and probability calculations on her way to surviving cancer and winning it all on The Price is Right.

My goal in going on "The Price Is Right" was to play the best I possibly could given tremendous uncertainty about the outcome. The same was true for my breast cancer. The stakes were just higher.

Ah, the uncertainty of life.

Tags: cancer, game theory, probability, survival, The Price is Right

07 Aug 12:26

Frank Bruni: Hooray For Fox News

by Joe Jervis
"This wasn’t a debate, at least not like most of those I’ve seen. This was an inquisition. On Thursday night in Cleveland, the Fox News moderators did what only Fox News moderators could have done, because the representatives of any other network would have been accused of pro-Democratic partisanship. They took each of the 10 Republicans onstage to task. They held each of them to account. They made each address the most prominent blemishes on his record, the most profound apprehensions that voters feel about him, the greatest vulnerability that he has. It was riveting. It was admirable. It compels me to write a cluster of words I never imagined writing: hooray for Fox News." - Frank Bruni, writing for the New York Times. (Read the full piece.)
05 Aug 11:30

Unlimited vacation is a silly Silicon Valley trend that just won't die

by Dylan Matthews
lbstopher

My tech company just switched to unlimited vacation for senior people. I don't think it will change the # of days I take.

Big news out of Netflix:

UPDATE: Netflix announces unlimited paid paternity and maternity leave for its employees during the 1st year after birth, adoption. • $NFLX

— CNBC Now (@CNBCnow) August 4, 2015

On the one hand, this is a great deal more than most companies do for new parents. The US, basically alone among developed nations, doesn't mandate that companies provide paid family leave at all, and while in theory companies are required to allow 12 weeks of unpaid leave, exemptions to that rule mean that it applies to only about 40 percent of workers. In 2014, less than one in seven workers had access to paid leave. Companies that buck that trend and provide months of paid leave are to be commended.

But Netflix is also engaging in a trend that's at best silly and at worst actively harmful to workers: the idea of "unlimited" leave or vacation.

Leave and vacation will never be truly unlimited

Just when I thought I was out… Even on the lake the laptop will pull you back in. (Shutterstock)

Obviously no leave is truly unlimited. That's what it means to have a job. So in practice there are limits. Netflix, for example, gives its supposedly "unlimited" benefits "during the 1st year." Giving employees a full year off is good! But it's not what the word "unlimited" means.

It's also not the same thing as Netflix saying explicitly, "New parents aren't supposed to work the first year after the birth or adoption of their child." It's still optional. While you could, in theory, take the whole year off, are Netflix employees actually going to do that? Or are they going to feel pressure to get back on the job sooner, lest they fall behind their childless colleagues?

This is the basic problem with "unlimited" leave: It replaces clear, predictable limits with limits imposed by vague and arbitrary social pressure to work more. That's what the German tech company Travis CI found upon adopting an unlimited policy (via Melissa Dahl):

When people are uncertain about how many days it's okay to take off, you'll see curious things happen. People will hesitate to take a vacation as they don't want to seem like that person who's taking the most vacation days. It's a race to the bottom instead of a race towards a well rested and happy team…

Uncertainty about how many days are okay to take time off can also stir inequality. It can turn into a privilege for some people who may be more aggressive in taking vacations compared to people who feel like their work and their appreciation at work would suffer from being away for too long.

Some companies have noticed that people are too afraid to take advantage of unlimited leave. Evernote, the note-taking app, started giving workers $1,000 to take at least a week off. But when this was announced, its CEO hadn't vacationed in years. Is risking his, or other executives', approval really worth an extra $1,000 to workers? Some other places, like Travis CI, started a minimum vacation policy. That's far better, especially when it's generous; Travis CI mandates at least five weeks off. But a low minimum could have the same problem as a regular unlimited policy. If you're required to take one week off, and more is optional, are your bosses really not going to think less of you for going out of your way to take more?

Damn you, Cantor. Dividing up infinity is stressful. (Shutterstock)

Unlimited leave also hurts workers through decision paralysis. Allocating unlimited resources can be daunting, and just as research has shown that giving workers too many 401(k) mutual funds to choose from can lead to less and worse savings, unlimited vacation can overwhelm workers to the point that they don't use it to full effect.

So why do companies offer unlimited vacation? Two reasons. One, they know that workers won't actually use it, so it doesn't pose much of a danger from the employer's standpoint. Two, it eliminates a liability from their books. Unused vacation days show up as a liability on balance sheets, so unlimited vacation makes life easier on the accounting front.

Here's a better idea. Instead of unlimited leave and vacation, firms should just have generous leave and vacation: a year for new parents and five weeks a year for vacation with no rollover, say. If employers want to make that a minimum to increase flexibility, even better. But refusing to give hard numbers doesn't help matters.


06 Aug 05:00

Comic for 2015.08.06

lbstopher

click for video.

New Cyanide and Happiness Comic
07 Aug 10:07

The Money Saved by Chris Christie's Plan to Means Test Social Security

by dean.baker1@verizon.net (Dean Baker)

In case you were wondering whether we can substantially improve the financing of Social Security by means-testing benefits, as Governor Christie advocated in the Republican candidate debate, CEPR has the answer for you. We did a paper a few years back on this very issue.

The key point is that, while the rich have a large share of the income, they don't have a large share of Social Security benefits. That is what we would expect with a progressive payback structure in a program with a cap on taxable income. When we did the paper, less than 0.6 percent of benefits went to individuals with non-Social Security income over $200,000. Since incomes have risen somewhat in the last five years, it would be around 1.1 percent of benefits today.

However, we're not going to be able to zero out benefits for everyone who has non-Social Security income over $200,000; otherwise we would find lots of people with incomes of $199,900. As a practical matter, we would have to phase out benefits. A rapid phase-out would take away 20 cents of benefits for each dollar by which a person's income exceeds $200,000.

This would mean, for example, that a person with an income of $220,000 would see their benefits reduced by $4,000. This creates a very high marginal tax rate (people are also paying income tax), which would presumably prompt some behavioral response, since they would be paying well over 50 cents of each additional dollar of income in taxes. If this were a person who was still working and paying Social Security taxes, the effective marginal tax rate would be over 70 percent.
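A small sketch of that arithmetic in R; the 39 percent income-tax rate used below is an illustrative assumption (the post does not specify a rate), while the 20 percent phase-out and the 12.4 percent payroll tax come from the discussion above:

income    <- 220000
threshold <- 200000
phase_out <- 0.20                       # 20 cents of benefits lost per dollar over the threshold

phase_out * (income - threshold)        # benefit cut: $4,000, as in the example above

income_tax  <- 0.39                     # assumed high-bracket marginal income-tax rate (illustration only)
payroll_tax <- 0.124                    # combined employee/employer Social Security tax

phase_out + income_tax                  # ~59%: well over 50 cents of each extra dollar
phase_out + income_tax + payroll_tax    # ~71% for someone still working and paying payroll tax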

By our calculations, this 20 percent phase out would reduce Social Security payouts by roughly 0.6 percent of payouts, the equivalent of an increase in the payroll tax of around 0.09 percentage point. That's not zero, but it does not hugely change the finances of the program.

06 Aug 12:05

Elephants in the room

THE first televised debate among the Republican candidates for president takes place in Cleveland on the evening of August 6th. Fox News is broadcasting the event, which, in a crowded field of runners, it has limited to ten participants based on their standing in the polls. Chris Christie and John Kasich just about made the cut, but Rick Perry, Bobby Jindal, Rick Santorum and the rest will have to settle for taking part in a separate televised meeting (dubbed by some as the losers’ forum) that will precede the big show. The debate is the first big opportunity for the candidates to present themselves on a national stage. They will hope to make a good impression and avoid the kind of gaffes (Rick Perry’s “Oops!” in the 2012 race) that can bring a campaign crashing to the ground.

Will Jeb Bush trip up over his position on the Iraq war? Can Marco Rubio muster the courage to speak for immigration reform? Will Donald Trump say something outrageously silly? (Of course he will.) But before settling down to watch the fun, take some time to meet the candidates in our graphic.

Continue reading
05 Aug 14:43

Saturday Morning Breakfast Cereal - Supernatural Selection

by admin@smbc-comics.com
lbstopher

this is my favorite.

Hovertext: Your move, people who aren't reductionists.


New comic!
05 Aug 11:18

Kate Pierson Gets Married

by Joe Jervis
Via the Daily Mail:
Kate Pierson, co-founder of the new wave party band The B-52s, has married her longtime lover Monica Coleman in Hawaii. The 67-year-old singer/songwriter took to her Facebook page on Tuesday to share photos of her and her new wife with the caption: ‘#itsofficial!’ A series of snaps show the happy couple post-ceremony as they don chic wedding gowns, with Pierson’s in pink with an elegant Victorian style. A rep for Pierson told E! News: "The wedding was attended by the entire B-52s band; Sia Furler and husband Erik Anders Lang were the witnesses. Sia performed the song she wrote for the couple, Crush Me With Your Love, on Pierson's first solo album Guitars and Microphones, and was accompanied by the famous Hawaii music group the Lim Family, who serenaded the attendees during the event. Fred Schneider made the best man speech."
RELATED: B-52s member Keith Strickland married his longtime boyfriend in Key West shortly after Florida legalized same-sex marriage in January.
04 Aug 17:27

Obi-Juan Cosplayer at San Japan 2015

by Brad
lbstopher

via Bewarethewumpus.

04 Aug 21:48

Generalised Linear Models in R

lbstopher

Who doesn't love ice cream?

Linear models are the bread and butter of statistics, but there is a lot more to them than taking a ruler and drawing a line through a couple of points.

Some time ago Rasmus Bååth published an insightful blog article about how such models could be described from a distribution centric point of view, instead of the classic error terms convention.

I think the distribution centric view makes generalised linear models (GLM) much easier to understand as well. That’s the purpose of this post.

Using data on ice cream sales statistics I will set out to illustrate different models, starting with traditional linear least square regression, moving on to a linear model, a log-transformed linear model and then on to generalised linear models, namely a Poisson (log) GLM and Binomial (logistic) GLM. Additionally, I will run a simulation with each model.

Along the way I aim to reveal the intuition behind a GLM using Rasmus’ distribution centric description. I hope to clarify why one would want to use different distributions and link functions.

The data

Here is the example data set I will be using. It shows the number of ice creams sold at different temperatures.
icecream 
I will be comparing the results from this model with others. Hence, I write a helper function to provide consistent graphical outputs:
basicPlot 
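A minimal stand-in for the icecream data and the basicPlot helper, assuming a data frame with columns temp (in ºC) and units sold; the values are simulated from the Normal fit reported further down rather than taken from the original observations, so the numerical outputs quoted later in the post (which come from the original data) will differ from what this stand-in produces:

# Stand-in data: simulated from the Normal model reported later in the post
# (mu = -159.5 + 30.1 * temp, sigma = 38.1), NOT the original observations
set.seed(123)
icecream <- data.frame(temp = seq(12, 25, by = 1))
icecream$units <- round(rnorm(nrow(icecream),
                              mean = -159.5 + 30.1 * icecream$temp,
                              sd   = 38.1))

# Helper for consistent plots of units sold against temperature
basicPlot <- function(...) {
  plot(units ~ temp, data = icecream,
       xlab = "Temperature (Celsius)", ylab = "Units sold",
       pch = 19, col = "#00526D", ...)
}
basicPlot()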
As expected more ice cream is sold at higher temperature.

The challenge

I would like to create a model that predicts the number of ice creams sold for different temperatures, even outside the range of the available data.

I am particularly interested in how my models will behave in the more extreme cases when it is freezing outside, say when the temperature drops to 0ºC and also what it would say for a very hot summer's day at 35ºC.

Linear least square

My first approach is to take a ruler and draw a straight line through the points, such that it minimises the sum of the squared vertical distances between the points and the line. That's basically a linear least square regression line:
lsq.mod 
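A minimal sketch of this step, using the icecream data frame assumed above:

# Ordinary least squares fit of units sold against temperature
lsq.mod <- lm(units ~ temp, data = icecream)
basicPlot()
abline(lsq.mod, col = "orange", lwd = 2)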
That's easy and looks not unreasonable.

Linear regression

For the least square method I didn't think about distributions at all. It is just common sense, or is it?

Let's start with a probability distribution centric description of the data.

I believe the observation \(y_i\) was drawn from a Normal (aka Gaussian) distribution with a mean \(\mu_i\), depending on the temperature \(x_i\) and a constant variance \(\sigma^2\) across all temperatures.

On another day with the same temperature I might have sold a different quantity of ice cream, but over many days with the same temperature the number of ice creams sold would start to fall into a range defined by \(\mu_i\pm\sigma\).

Thus, using a distribution centric notation my model looks like this:
\[
y_i \sim \mathcal{N}(\mu_i, \sigma^2), \\
\mathbb{E}[y_i]=\mu_i = \alpha + \beta x_i \; \text{for all}\; i
\]Note, going forward I will drop the \(\text{for all}\; i\) statement for brevity.

Or, alternatively I might argue that the residuals, the difference between the observations and predicted values, follow a Gaussian distribution with a mean of 0 and variance \(\sigma^2\):
\[
\varepsilon_i = y_i - \mu_i \sim \mathcal{N}(0, \sigma^2)
\]Furthermore, the equation
\[
\mathbb{E}[y_i]=\mu_i = \alpha + \beta x_i
\]states my belief that the expected value of \(y_i\) is identical to the parameter \(\mu_i\) of the underlying distribution, while the variance is constant.

The same model written in the classic error terms convention would be:
\[
y_i = \alpha + \beta x_i + \varepsilon_i, \text{ with }
\varepsilon_i \sim \mathcal{N}(0, \sigma^2)
\]I think the probability distribution centric convention makes it clearer that my observation is just one realisation from a distribution. It also emphasises that the parameter of the distribution is modelled linearly.

To model this in R explicitly I use the glm function, in which I specify the "response distribution" (namely the number of ice creams) as Gaussian and the link function from the expected value of the distribution to the linear predictor as identity. Indeed, this really is the trick with a GLM: it describes how the distribution of the observations and their expected value, often after a smooth transformation, relate to the actual measurements (predictors) in a linear way. The 'link' is the inverse function of the original transformation of the data.

That's what it says on the GLM tin and that's all there is to it!

(Actually, there is also the bit which estimates the parameters from the data via maximum likelihood, but I will skip this here.)

lin.mod 
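A sketch of how such a model can be fitted with glm; the identical fit could also be obtained with lm, but the explicit family and link make the distribution-centric view visible:

# Gaussian GLM with identity link: the classic linear regression model
lin.mod <- glm(units ~ temp, data = icecream,
               family = gaussian(link = "identity"))
summary(lin.mod)           # estimates of alpha, beta and the dispersion (sigma^2)
basicPlot()
abline(lin.mod, col = "orange", lwd = 2)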
Thus, to mimic my data I could generate random numbers from the following Normal distribution:
\[
y_i \sim \mathcal{N}(\mu_i, \sigma^2) \text{ with }
\mu_i = -159.5 + 30.1 \; x_i \text{ and }
\sigma = 38.1
\]Although the linear model looks fine in the range of temperatures observed, it doesn't make much sense at 0ºC. The intercept is at -159, which would mean that customers return on average 159 units of ice cream on a freezing day. Well, I don't think so.

Log-transformed linear regression

Ok, perhaps I can transform the data first. Ideally I would like to ensure that the transformed data has only positive values. The first transformation that comes to my mind in those cases is the logarithm.

So, let's model the ice cream sales on a logarithmic scale. Thus my model changes to:
\[
\log(y_i) \sim \mathcal{N}(\mu_i, \sigma^2)\\
\mathbb{E}[\log(y_i)]=\mu_i = \alpha + \beta x_i
\]This model implies that I believe the sales follow a log-normal distribution:\[y_i \sim \log\mathcal{N}(\mu_i, \sigma^2)\]Note, the log-normal distribution is skewed to the right, which means that I regard higher sales figures more likely than lower sales figures.

Although the model is still linear on a log-scale, I have to remember to transform the predictions back to the original scale because \(\mathbb{E}[\log(y_i)] \neq \log(\mathbb{E}[y_i])\). This is shown below. For a discussion of the transformation of the lognormal distribution see for example the help page of the R function rlnorm.
\[
y_i \sim \log\mathcal{N}(\mu_i, \sigma^2)\\
\mathbb{E}[y_i] = \exp(\mu_i + \sigma^2/2)
\]
log.lin.mod 
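A sketch of the log-transformed model; the back-transformation to the original scale uses the lognormal mean formula just given:

# Linear model for log(units)
log.lin.mod <- glm(log(units) ~ temp, data = icecream,
                   family = gaussian(link = "identity"))
log.lin.sig <- summary(log.lin.mod)$dispersion    # estimate of sigma^2 on the log scale

# E[y] = exp(mu + sigma^2 / 2), not exp(mu)
log.lin.pred <- exp(predict(log.lin.mod) + 0.5 * log.lin.sig)
basicPlot()
lines(icecream$temp, log.lin.pred, col = "red", lwd = 2)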

This plot looks a little better than the previous linear model and it predicts that I would sell, on average, 82 ice creams when the temperature is 0ºC:
exp(coef(log.lin.mod)[1])
## (Intercept) 
##    81.62131
Although this model makes a little more sense, it appears that it predicts too many sales at the low and high end of the observed temperature range. Furthermore, there is another problem with this model and the previous linear model as well: the assumed model distributions generate real numbers, but ice cream sales only occur in whole numbers. As a result, any draw from the model distribution should be a whole number.

Poisson regression

The classic approach for count data is the Poisson distribution.

The Poisson distribution has only one parameter, here \(\mu_i\), which is also its expected value. The canonical link function for \(\mu_i\) is the logarithm, i.e. the logarithm of the expected value is regarded a linear function of the predictors. This means I have to apply the exponential function to the linear model to get back to the original scale. This is distinctively different to the log-transformed linear model above, where the original data was transformed, not the expected value of the data. The log function here is the link function, as it links the transformed expected value to the linear model.

Here is my model:
\[
y_i \sim \text{Poisson}(\mu_i)\\
\mathbb{E}[y_i]=\mu_i=\exp(\alpha + \beta x_i) = \exp(\alpha) \exp(\beta x_i)\\
\log(\mu_i) = \alpha + \beta x_i\]Let me say this again, although the expected value of my observation is a real number, the Poisson distribution will generate only whole numbers, in line with the actual sales.
pois.mod 
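A sketch of the Poisson fit with its canonical log link; the coefficient values discussed below are the ones the post reports for the original data:

# Poisson GLM: the log of the expected value is linear in temperature
pois.mod <- glm(units ~ temp, data = icecream,
                family = poisson(link = "log"))
summary(pois.mod)
basicPlot()
lines(icecream$temp, predict(pois.mod, type = "response"), col = "blue", lwd = 2)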

This looks pretty good. The interpretation of the coefficients should be clear now. I have to use the exp function to make a prediction of sales for a given temperature. However, R will do this for me automatically if I set type="response" in the predict statement.

From the coefficients I can read off that at 0ºC I expect to sell \(\exp(4.54)=94\) ice creams and that with each one degree increase in temperature the sales are predicted to increase by \(\exp(0.076) - 1 = 7.9\%\).

Note, the exponential function turns the additive scale into a multiplicative one.

So far, so good. My model is in line with my observations. Additionally, it will not predict negative sales and if I would simulate from a Poisson distribution with a mean given by the above model I will always only get whole numbers back.

However, my model will also predict that I should expect to sell over 1000 ice creams if the temperature reaches 32ºC:
predict(pois.mod, newdata=data.frame(temp=32), type="response")
##        1 
## 1056.651
Perhaps the exponential growth in my model looks a little too good to be true. Indeed, I am pretty sure that my market will be saturated at around 800. Warning: this is a modelling assumption from my side!

Binomial regression

Ok, let me think about the problem this way: If I have 800 potential sales then I'd like to understand the proportion sold at a given temperature.

This suggests a binomial distribution for the number of successful sales out of 800. The key parameter for the binomial distribution is the probability of success, the probability that someone will buy ice cream as a function of temperature.

Dividing my sales statistics by 800 would give me a first proxy for the probability of selling all ice cream.

Therefore I need an S-shaped curve that maps the linear predictor into probabilities between 0 and 100%.

A canonical choice is the logistic function:
\[
\text{logit}^{-1}(u) = \frac{e^u}{e^u + 1} = \frac{1}{1 + e^{-u}}
\]
With that my model can be described as:
\[
y_i \sim \text{Binom}(n, \mu_i)\\
\mathbb{E}[y_i]=\mu_i=\text{logit}^{-1}(\alpha + \beta x_i)\\
\text{logit}(\mu_i) = \alpha + \beta x_i\]
market.size 
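A sketch of the binomial model: with an assumed market size of 800, one standard way to hand the data to glm is as a two-column matrix of successes (units sold) and failures (unsold capacity):

market.size <- 800   # assumed saturation level, as discussed above

# Binomial GLM with logit link
bin.glm <- glm(cbind(units, market.size - units) ~ temp, data = icecream,
               family = binomial(link = "logit"))
summary(bin.glm)
basicPlot()
lines(icecream$temp,
      plogis(predict(bin.glm)) * market.size,   # back-transform to expected units sold
      col = "purple", lwd = 2)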

The chart doesn't look too bad at all!

As the temperature climbs higher and higher, this model predicts that sales level off at market saturation, while all the other models so far would predict ever higher sales.

I can predict sales at 0ºC and 35ºC using the inverse of the logistic function, which is given in R as plogis:

# Sales at 0 Celsius
plogis(coef(bin.glm)[1])*market.size
## (Intercept) 
##    39.09618
# Sales at 35 Celsius
plogis(coef(bin.glm)[1] +  coef(bin.glm)[2]*35)*market.size
## (Intercept) 
##    745.7449
So, that is 39 ice creams at 0ºC and 746 ice creams at 35ºC. Yet, these results will of course change if I change my assumptions on the market size. A market size of 1,000 would suggest that I can sell 55 units at 0ºC and 846 at 35ºC.

Summary

Let's bring all the models together into one graph, with temperatures ranging from 0 to 35ºC.
temp 
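A sketch of how the four sets of predictions might be produced and overlaid, using the model objects sketched above:

temp   <- 0:35
newdat <- data.frame(temp = temp)

p.lm   <- predict(lin.mod, newdata = newdat)
p.log  <- exp(predict(log.lin.mod, newdata = newdat) + 0.5 * log.lin.sig)
p.pois <- predict(pois.mod, newdata = newdat, type = "response")
p.bin  <- plogis(predict(bin.glm, newdata = newdat)) * market.size

plot(units ~ temp, data = icecream, xlim = c(0, 35), ylim = c(-200, 1300),
     xlab = "Temperature (Celsius)", ylab = "Units sold", pch = 19)
lines(temp, p.lm,   col = "orange", lwd = 2)
lines(temp, p.log,  col = "red",    lwd = 2)
lines(temp, p.pois, col = "blue",   lwd = 2)
lines(temp, p.bin,  col = "purple", lwd = 2)
legend("topleft",
       legend = c("linear", "log-transformed linear", "Poisson (log)", "Binomial (logit)"),
       col = c("orange", "red", "blue", "purple"), lwd = 2, bty = "n")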

The chart shows the predictions of my four models over a temperature range from 0 to 35ºC. Although the linear model looks OK between 10 and perhaps 30ºC, it clearly shows its limitations. The log-transformed linear and Poisson models appear to give similar predictions, but will predict an ever-accelerating increase in sales as temperature increases. I don't believe this makes sense, as even the most ice-cream-loving person can only eat so much ice cream on a really hot day. And that's why I would go with the Binomial model to predict my ice cream sales.

Simulations

Having used the distribution centric view to describe my models leads naturally to simulations. If the models are good, then I shouldn't be able to identify the simulation from the real data.

In all my models the linear structure was
\[
g(\mu_i) = \alpha + \beta x_i\]or in matrix notation
\[
g(\mu) = A v\] with \(A_{i,\cdot} = [1, x_i]\) and \(v=[\alpha, \beta]\), whereby \(A\) is the model matrix, \(v\) the vector of coefficients and \(g\) the link function.

With that being said, let's simulate data from each distribution for the temperatures measured in the original data and compare against the actual sales units.
n 
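A sketch of the simulation step, drawing one value per observation from each fitted model; the Poisson and binomial draws are whole numbers, while the Normal and log-normal draws are not:

n <- nrow(icecream)
set.seed(2)

sim.lm   <- rnorm(n, mean = predict(lin.mod),
                  sd = sqrt(summary(lin.mod)$dispersion))
sim.log  <- rlnorm(n, meanlog = predict(log.lin.mod),
                   sdlog = sqrt(log.lin.sig))
sim.pois <- rpois(n, lambda = predict(pois.mod, type = "response"))
sim.bin  <- rbinom(n, size = market.size, prob = plogis(predict(bin.glm)))

round(data.frame(temp = icecream$temp, observed = icecream$units,
                 normal = sim.lm, lognormal = sim.log,
                 poisson = sim.pois, binomial = sim.bin), 1)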

The chart displays one simulation per observation from each model, but I believe it already shows some interesting aspects. Not only do I see that the Poisson and Binomial models generate whole numbers while the Normal and log-transformed Normal models produce real numbers, but I also notice the skewness of the log-normal distribution in the red point at 19.4ºC.

Furthermore, the linear model predicts equally likely figures above and below the mean and at 16.4ºC the prediction appears a little low - perhaps as a result.

Additionally the high sales prediction at 25.1ºC of the log-transformed Normal and Poisson model shouldn't come as a surprise either.

Again, the simulation of the binomial model seems the most realistic to me.

Conclusions

I hope this little article illustrated the intuition behind generalised linear models. Fitting a model to data requires more than just applying an algorithm. In particular it is worth thinking about:
  • the range of expected values: are they bounded or range from \(-\infty\) to \(\infty\)?
  • the type of observations: do I expect real numbers, whole numbers or proportions?
  • how to link the distribution parameters to the observations

There are many aspects of GLMs which I haven't touched on here, such as:
  • all the above models incorporate a fixed level of volatility. However, in practice, the variability of making a sale at low temperatures might be significantly different than at high temperatures. Check the residual plots and consider an over-dispersed model.
  • I used so called canonical link functions in my models, which have nice theoretical properties, but other choices are possible as well.
  • the distribution has to be chosen from the exponential family, e.g. Normal, Gamma, Poisson, binomial, Tweedie, etc.
  • GLMs use maximum likelihood as the criterion for fitting the models. I sketched the difference between least squares and maximum likelihood in an earlier post.

R code

You find the code of this post also on GitHub.

Session Info

R version 3.2.1 (2015-06-18)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X 10.10.4 (Yosemite)

locale:
[1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8

attached base packages:
[1] stats graphics grDevices utils datasets methods base     

other attached packages:
[1] arm_1.8-6 lme4_1.1-8 Matrix_1.2-1 MASS_7.3-42 

loaded via a namespace (and not attached):
 [1] minqa_1.2.4     tools_3.2.1     coda_0.17-1     abind_1.4-3    
 [5] Rcpp_0.11.6     splines_3.2.1   nlme_3.1-121    grid_3.2.1     
 [9] nloptr_1.0.4    lattice_0.20-31
04 Aug 21:16

Kevin Bacon: We Need More Naked Men

by Joe Jervis