Shared posts

01 Oct 15:40

Don’t trust Rasmussen polls!

by Andrew

(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)


Political scientist Alan Abramowitz brings us some news about the notorious pollster:

In the past 12 months, according to Real Clear Politics, there have been 72 national polls matching Clinton with Trump—16 polls conducted by Fox News or Rasmussen and 56 polls conducted by other polling organizations. Here are the results:

Trump has led or been tied with Clinton in 44 percent (7 of 16) of Fox and Rasmussen Polls: 3 of 5 Rasmussen Polls and 4 of 11 Fox News Polls.

Trump has led or been tied with Clinton in 7 percent (4 of 56) of polls conducted by other polling organizations.

To put it another way, Fox and Rasmussen together have accounted for 22 percent of all national polls in the past year but they have accounted for 64 percent of the polls in which Trump has been leading or tied with Clinton.
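Just to check the arithmetic, here is a quick sketch in Python using only the counts quoted above:

```python
# Counts taken from the Abramowitz quote above.
fox_rasmussen = 16      # Fox News + Rasmussen polls in the past 12 months
others = 56             # polls by all other organizations
fox_rasmussen_lead_or_tie = 7
others_lead_or_tie = 4

all_polls = fox_rasmussen + others                                # 72
all_lead_or_tie = fox_rasmussen_lead_or_tie + others_lead_or_tie  # 11

print(f"Trump lead/tie, Fox/Rasmussen polls:   {fox_rasmussen_lead_or_tie / fox_rasmussen:.0%}")    # 44%
print(f"Trump lead/tie, other polls:           {others_lead_or_tie / others:.0%}")                  # 7%
print(f"Fox/Rasmussen share of all polls:      {fox_rasmussen / all_polls:.0%}")                    # 22%
print(f"Fox/Rasmussen share of lead/tie polls: {fox_rasmussen_lead_or_tie / all_lead_or_tie:.0%}")  # 64%
```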

Using Pollster’s tool that allows you to calculate polling averages with different types of polls and polling organizations excluded:

Current Pollster average: Clinton +2.7
Removing Rasmussen and Fox News: Clinton +7.7
Live Interview polls only: Clinton +8.8
Live interview polls without Fox News: Clinton +9.2

I find it remarkable that simply removing Rasmussen and Fox changes the average by 5 points.
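To get a feel for how a handful of polls with a big house effect can move a simple average, here is a toy example with invented margins (not the actual Pollster data):

```python
# Toy example with invented margins (Clinton minus Trump, in points); NOT the actual
# Pollster data. It just shows the mechanics: a few polls with a strong house effect
# pull a simple average down, and excluding them moves it back up.
polls = [
    ("Rasmussen", -2), ("Rasmussen", 0), ("Fox News", -1), ("Fox News", 1),
    ("Other A", 6), ("Other B", 8), ("Other C", 7), ("Other D", 9), ("Other E", 5),
]

def average(margins):
    return sum(margins) / len(margins)

all_avg = average([m for _, m in polls])
filtered_avg = average([m for name, m in polls if name not in {"Rasmussen", "Fox News"}])

print(f"All polls:                   Clinton {all_avg:+.1f}")
print(f"Without Rasmussen/Fox News:  Clinton {filtered_avg:+.1f}")
```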

Hey—I remember Rasmussen! They’re a bunch of clowns.


Here are a couple of old posts about Rasmussen.

From 2010:

Rasmussen polls are consistently to the right of other polls, and this is often explained in terms of legitimate differences in methodological minutiae. But there seems to be evidence that Rasmussen’s house effect is much larger when Republicans are behind, and that it appears and disappears quickly at different points in the election cycle.
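One crude way to quantify a house effect, sketched here with made-up numbers (this is not the analysis behind that post): compare each of a pollster's margins with the average of other polls fielded around the same time, and take the mean difference.

```python
# Sketch of a simple house-effect estimate: for each poll by the pollster of interest,
# subtract the average margin of the other polls in the same week, then average the
# differences. The data below are invented for illustration.
from collections import defaultdict

# (pollster, week, Democratic-minus-Republican margin) -- made-up numbers
polls = [
    ("Rasmussen", 1, 0), ("Other", 1, 5), ("Other", 1, 7),
    ("Rasmussen", 2, -1), ("Other", 2, 6), ("Other", 2, 4),
    ("Rasmussen", 3, 2), ("Other", 3, 8),
]

by_week = defaultdict(list)
for pollster, week, margin in polls:
    by_week[week].append((pollster, margin))

diffs = []
for week, entries in by_week.items():
    target = [m for p, m in entries if p == "Rasmussen"]
    others = [m for p, m in entries if p != "Rasmussen"]
    if target and others:
        diffs.append(sum(target) / len(target) - sum(others) / len(others))

print(f"Estimated house effect: {sum(diffs) / len(diffs):+.1f} points")
```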

From 2008:

I was looking up the governors’ popularity numbers on the web, and came across this page from Rasmussen Reports which shows Sarah Palin as the 3rd-most-popular governor. But then I looked more carefully. Janet Napolitano of Arizona is viewed as Excellent by 28% of respondents, Good by 27%, Fair by 26%, and Poor by 27%. That adds up to 108%! What’s going on? I’d think they would have a computer program to pipe the survey results directly into the spreadsheet. But I guess not; someone must be entering these numbers by hand. Weird.
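The inconsistency would be trivial to catch automatically; here is a sketch of the kind of sanity check you’d hope runs before numbers go up on the page:

```python
# The Napolitano numbers from the Rasmussen page quoted above.
ratings = {"Excellent": 28, "Good": 27, "Fair": 26, "Poor": 27}

total = sum(ratings.values())
print(f"Total: {total}%")  # 108% -- should be about 100%, allowing for rounding
if not 99 <= total <= 101:
    print("Warning: response percentages do not sum to 100%")
```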

I just checked that page again and it’s still wrong:

[Screenshot of the Rasmussen Reports page, captured 20 May 2016]

What ever happened to good old American quality control?

But, hey, it’s a living. Produce crap numbers that disagree with everyone else and you’re gonna get headlines.

You’d think news organizations would eventually twig to this particular scam and stop reporting Rasmussen numbers as if they’re actually data, but I guess polls are the journalistic equivalent of crack cocaine.

Given that major news organizations are reporting whatever joke study gets released in PPNAS, I guess we shouldn’t be surprised they’ll fall for Rasmussen, time and time again. It’s inducing stat rage in me nonetheless.

If only science reporters and political reporters had the standards of sports reporters. We can only dream.


17 Feb 19:29

Changing hiring practices towards research transparency: The first open science statement in a professorship advertisement

by FelixS

Engaging in open science practices increases knowledge as a common good and ensures the reproducibility, verifiability, and credibility of research. But some fear that, at the individual strategic level (particularly from an early-career perspective), engaging in research transparency could reduce a researcher’s chances of getting a tenured position in academia.

University hiring decisions are often driven (among other criteria) by publication quantity and journal prestige: “Several universities base promotion decisions on threshold h-index values and on the number of articles in ‘high-impact’ journals” (Hicks, Wouters, Waltman, de Rijcke, & Rafols, 2015), and Nosek, Spies, & Motyl (2012) mention “[…] the prevailing perception that publication numbers and journal prestige are the key drivers for professional success”.
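For reference, the h-index mentioned in that quote is simple to compute: a researcher has index h if h of their papers each have at least h citations. A minimal sketch:

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4: four papers with at least 4 citations each
print(h_index([25, 8, 5, 3, 3]))   # 3
```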

We all know where this focus on pure quantity and too-perfect results led us: “In a world where researchers are rewarded for how many papers they publish, this can lead to a decrease in the truth value of our shared knowledge” (Nelson, Simmons, & Simonsohn, 2012), which can be seen in ongoing debates about low replication rates in psychology, medicine, or economics.

Doing studies with high statistical power, preparing open data, and trying to publish realistic results that are not hacked to (unrealistic) perfection will slow scientists down. Researchers engaging in these good research practices will probably publish fewer papers, and if publication count is the major selection criterion, they are at a disadvantage in a competitive job market for tenured positions.
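As a rough illustration of the cost of high power, here is a sketch using statsmodels; the effect size and power targets are illustrative assumptions, not numbers from any particular study:

```python
# Sketch: sample size per group for a two-sample t-test at 80% vs. 95% power.
# Effect size (Cohen's d = 0.4) and alpha = 0.05 are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for power in (0.80, 0.95):
    n = analysis.solve_power(effect_size=0.4, alpha=0.05, power=power)
    print(f"Power {power:.0%}: about {n:.0f} participants per group")
```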

For this reason, hiring standards also have to change, towards valuing research transparency, and the Department of Psychology at LMU München has taken a first step in this direction.

Based on a suggestion from our Open Science Committee, the department added a paragraph to a professorship job advertisement asking candidates for an open science statement:

[Screenshot of the job advertisement (W3 professorship)]

Here’s a translation of the open science paragraph:

Our department embraces the values of open science and strives for replicable and reproducible research. To this end, we support transparent research with open data, open materials, and pre-registrations. Candidates are asked to describe in what way they have already pursued and plan to pursue these goals.

This paragraph clearly communicates open science as a core value of our department.

Of course, criteria of research transparency will not be the only criteria of evaluation for candidates. But, to my knowledge, this is the first time that they are explicit criteria.

Jean-Claude Burgelman (Directorate General for Research and Innovation of the European Commission) says that “the career system has to gratify open science”. I hope that many more universities will follow the LMU’s lead with an explicit commitment to open science in their hiring practices.
