A mother is outraged after a police officer in Hometown, Illinois, shot her family’s dog in the head right in front of her 6-year-old daughter on Friday. On Monday, the police chief announced that the officer involved in the shooting has been fired.
The tragic incident was set into motion after police received a 911 call from the owners of a 16-month-old German Shepherd/pit bull mix named Apollo, who said their family dog was on the loose. An officer eventually found the dog and followed it back to its home, where the animal reportedly “growled and approached him in a threatening manner.”
The officer reportedly responded by pulling out his firearm and shooting the dog.
Nicole Echlin, the dog’s owner, said she was trying to get her dog inside when the officer drew his gun.
“We were in the lawn and the cop already had his gun out. I tried to call him in the house and he just stood there staring and I guess he showed his teeth and the cop just shot him,” she told NBC Chicago. “Right in front of me and my 6-year-old daughter.”
Echlin says her daughter started screaming and fell to the floor after seeing her dog get shot. The mother picked up her child and took her inside. As for Echlin, she says she’s “still in shock” over what happened to her dog.
Witnesses told the news station that the dog didn’t appear to be threatening the officer when the shooting occurred.
One neighbor claimed the dog “wasn’t doing anything,” while another said the animal was not moving or lunging at the officer when it was shot. Echlin says her dog “showed a little bit of his teeth” when the officer took out his gun, and that the officer immediately shot him.
Hometown Police Chief Charles Forsyth announced on Monday that he had fired the officer involved, even though he “may have been justified under the Illinois Use of Force statute governing deadly force.”
Read more stories from TheBlaze
Dress Kristen Bell as Mary Poppins and have her sing a catchy parody touting the benefits of a $10.10 minimum wage — it’s a ticket to viral success, but it’s also ripe for rebuttal.
Reason Magazine’s Remy had no trouble sweeping her arguments aside.
In a rebuttal video released Friday, Remy turned the lyric, “Just a three-dollar increase can make a living wage,” into, “Just three pages of econ and perhaps your mind will change.”
While championing voluntary exchange and the free market, Remy takes a jab at a company that doesn’t pay wages at all (to interns): Funny or Die, the comedy outfit behind the original “Mary Poppins Quits” video.
The original video (content warning: some objectionable language):
And Reason’s rebuttal:
On the last day of my recent trip to Germany, I’d wanted to check out Deutschland’s brothels. The focus of my writing on sex work has been U.S.-centric thus far. So I wanted to speak to someone participating in sex work in a country where it’s legal. I was running out of time and euros, but it just so happened that the quickest route to my hotel after drinks with locals included an area known for its ladies of the night.
As we walked down a hookah-bar-lined street, the sex workers looked more empowered than any I’ve seen stateside. Tall and healthy-looking, with thick hair and thin waists, beautiful corsets shaping hourglasses, they certainly didn’t look oppressed—except perhaps by four-inch platform Lucite heels. (Those oppress any wearer.)
On our walk I learned that Germany’s decision to legalize prostitution not only helped sex workers, but actually decreased the number of human trafficking victims in the country. One of my companions told me that German feminists are nevertheless trying to recriminalize sex work. This is a mistake, she argued: legalization has improved sex workers’ lives.
Turns out, she was right. According to the data, violence against sex workers is down, while sex workers’ quality of life is up. And after testing began, post-legalization, researchers discovered no difference in sexually transmitted infection rates between sex workers and the general population.
Opponents claim legalizing prostitution has actually increased human trafficking in the country. But the data don’t support that claim. In fact, they show the opposite. From 2001, the year the law legalizing sex work in Germany was passed, to 2011, cases of sex-based human trafficking shrank by 10 percent.
It’s true that most German sex workers, 74 percent, are foreign born. However, Germany generally has a high immigration rate: only 81 percent of people living in Germany were born there. Interestingly, right about when Germany legalized sex work, Eastern European countries joined the European Union, economic crises hit post-communist countries, and globalization increased immigration flows. But these migrant workers are hardly child sex slaves. The mean age of a sex worker in Germany is 31.
Besides not being supported by data, the claim that legalizing prostitution increased human trafficking also defies common-sense economics. Legalization has brought about reduced prices for sex acts people demand. Sure, one might still pay a lot for high-quality service. And as I learned on this trip, nothing is cheap in Germany. But the days of paying more than 15 euros for sex from someone who clearly doesn’t want to be there are over. Time Magazine spoke to one tourist who described the country as “the Aldi for prostitutes.”
Whether you think such sexual transactions are a good thing or a bad thing, the fact remains that criminalization makes things more expensive, and high prices drive pimps to find new ways to satisfy demand. Prices matter for trafficking because it costs a lot to kidnap someone and hold them against their will. The economic realities of legalization have brought about a situation in which it makes no sense for traffickers to keep their human chattel in Germany, where prices are lower. While traffickers do bring their victims through the country as a corridor, they normally keep going until they reach one of the countries where prostitution is still illegal, like France, where prices are higher.
In Germany, the sex workers are workers, not slaves. In a country that has always taken workers’ rights seriously (certainly more so than civil liberties), sex work is no exception. Sex workers there are represented by a union and are afforded full police protection when something goes wrong.
The importance of this benefit of legalization simply cannot be overstated. Violence is far more likely when abusers know they are unlikely to be reported.
And then there’s police abuse of sex workers worldwide: In Ireland, where prostitution is still criminalized, one study estimates that 30 percent of the abuse sex workers report comes from police. And “in South Africa,” writes Fordham human rights professor Chi Mgbako, “police officers often fine sex workers inordinate sums of money and pocket the cash, resulting in a pattern of economic extortion of sex workers by state agents.” Mgbako adds: “South African sex workers report that police confiscate condoms to use as evidence of prostitution; demand sexual favors in exchange for release from jail or to avoid arrest; physically assault and rape sex workers; actively encourage or passively condone inmate sexual abuse of transgender female sex workers assigned to male prison cells; and use municipal laws to harass and arrest sex workers even when they’re engaged in activities unrelated to prostitution.” Of course, prostitutes are abused in the United States as well.
Some German feminists want to criminalize demand instead of supply, which makes sense on the surface. Why lock up the women, but not the men? Sweden did exactly that, using a new twist on an old idea: fighting trafficking by criminalizing the purchase of sex. It also passed laws that prevent sex workers from working together, recommending each other's customers, advertising, working from property they rent or own, or even cohabitating with a partner.
The result was sex workers enduring harassment instead of help from police and being forced to undergo invasive searches. Sex workers in Sweden were made to testify against their customers and ended up relying more on their pimps to find clients.
And the result was no change in the number of sex workers or their customers.
The truth is that laws against sex work actually help human traffickers. This is why the UN Human Rights Council published a report from the Global Alliance Against Traffic in Women that criticizes anti-trafficking measures that restrict sex workers.
According to the report, “Sex workers are negatively impacted by anti-trafficking measures.” Specifically, “The criminalisation of clients has not reduced trafficking or sex work, but has increased sex workers’ vulnerability to violence, harmed HIV responses, and infringed on sex workers’ rights.”
Furthermore, “Anti-trafficking discussions on demand have historically been stymied by anti-prostitution efforts to eradicate the sex work sector by criminalising clients, despite protests from sex workers rights groups and growing evidence that such approaches do not work.”
Sure enough, my very brief encounter with German sex workers seems to bolster that view.
According to Thomas Piketty, the rich get richer across generations. That increases inequality of wealth, and wealth inequality is a major social problem. He advocates a worldwide wealth tax to reduce it.
The widespread interest in Piketty’s argument has led me to puzzle about the source of the incomes of the rentiers—the idle rich—and the economic relationship between them and the rest of the population.
As I understand it—and I admit I have read only reviews—Piketty is not much concerned about first-generation creation of wealth. He recognizes that outstanding performers, athletes, and entrepreneurs such as Oprah Winfrey, LeBron James, Bill Gates, and Steve Jobs earn their wealth by providing the rest of us valuable entertainment, tools, and products that make us better off. Rather, he focuses on the inheritors of wealth who create and produce no goods or services, but merely live off the income their inherited wealth generates for them. He seems particularly concerned that wealth can grow over time in the possession of heirs who do nothing to earn or deserve it.
Set aside the matter of wealth inequality. I want to examine how—by what actions or inactions—the rich pass increased wealth to their heirs, and what that process means for the non-rich. Is it bad for the rest of the population for the idle rich to grow increasingly wealthy?
Let’s begin with the source of the income of the idle rich in a free economy. We need this proviso, “in a free economy,” because in an unfree, crony capitalist economy, the idle rich can receive income taken from others through the political process. We’re not talking about crony capitalism, but rather only wealth derived from existing wealth.
Whence come the “rents” these rentiers collect? Of what do they consist?
Well, they consist of interest on the rentier’s bank deposits and bonds (let us think here of corporate bonds only, because government bonds must be paid off with taxes taken from the general population), dividends on the rentier’s stocks, rental payments on the rentier’s land and buildings, and capital gains on any bonds or stocks or land he sells which are now more valuable than when he bought them.
How are these companies able to pay interest, dividends, or rental payments? If they are able, it is because they have used the rentier’s wealth productively. Perhaps they have built facilities with it, leased equipment with it, done research with it, or hired employees with it. When and if all these together produce new value for the companies’ customers in excess of the amount the rentier has invested with them, their capital value increases.
But think about what that all means: Who has the rentier’s wealth in this process? In the legal sense, the rentier has it: he has legal title to it. But in the practical sense, the employees of the different businesses have the rentier’s capital wealth: They work in the buildings, operate the equipment, use the research facilities. They use the rentier’s capital to earn their own incomes, and the better the capital equipment, the higher their productivity and hence the higher their incomes.
Another group that benefits from productive use of the idle rich’s wealth is the customers of the businesses in which that wealth is tied up. They get better products at lower prices.
By contrast, consider what happens if the rentier uses his wealth by himself, for himself, so that he has his wealth in both the legal and the practical sense. Suppose he spends his income—or even cashes in some stocks, bonds, and real estate—for goods that he personally can use and enjoy. Suppose he indulges himself, buying extra houses (or mansions) in scenic places, jetting around the world, driving hot cars, taking exciting trips, dining at expensive restaurants, watching professional sports from box seats to which he has season tickets, and so on. What happens to such a rentier’s income? And will this affect his net worth? Consumption spending is not the way to increase family wealth for the next generation.
In short, to earn income from their wealth, the idle rich must let other people use it.
In a free economy, the wealthy can stay wealthy only by putting the great bulk of their wealth into the hands of businesspeople who use it to put buildings and equipment and employees to work creating goods and services for the rest of us. The great bulk of rentiers’ wealth, in a free market, is at the disposal of and in service to other people. The rentier who gets continually wealthier merely has legal title to “his” capital. Others have the use and enjoyment of it.
Is acquisitive investing damaging to society? Does it harm those with less wealth? On the contrary, it raises the living standards of others. Indeed, there is probably nothing better the rich can do for their fellows than live simply, leave their capital invested, and watch it grow.
Glenn Beck and radio co-hosts Pat Gray and Stu Burguiere conducted a little in-house “science experiment” on Friday after reading about an ice cream sandwich that seemingly didn’t melt after being left outside for hours.
“So [WCPO-TV] did an experiment with Haagen-Dazs, which melted quickly into a puddle because that’s, you know, real stuff,” Gray explained. “The Klondike sandwich melted to a fair extent, they said. The Walmart sandwich, though it melted a bit, remained the most solid in appearance and still looked like a sandwich. And I’m looking at the photo of it. They don’t say at what hour this was taken, but it’s pretty well intact.”
Burguiere didn’t seem particularly disturbed that there may be chemicals and preservatives in his food, saying sarcastically: “I mean, I know there are piles of bodies outside of every single hospital in America because of the deadly Walmart ice cream sandwiches…”
“Come on, you know that the crap we put in our food is killing us,” Beck responded.
“I don’t believe that at all,” Burguiere said. “Not at all. The word ‘natural’ does not mean you’re going to be healthy. The word ‘chemicals’ does not mean you’re going to be sick.”
More from the discussion below:
The group decided to re-create the experiment using three different types of ice cream sandwiches: a Blue Bunny ice cream sandwich and two they picked up from Walmart, one of which was 97 percent fat-free.
After just one hour in 77-degree weather, all three ice cream sandwiches had melted.
“They all melted,” Beck said, adding: “The Walmart sandwich that is not fat-free looks like it’s made out of all fake stuff, which it’s not. The one made out of fake stuff looks yummy…”
More from the “experiment” below:
This is a guest post by Ross McKitrick. Tim Vogelsang and I have a new paper comparing climate models and observations over a 55-year span (1958-2012) in the tropical troposphere. Among other things we show that climate models are inconsistent with the HadAT, RICH and RAOBCORE weather balloon series. In a nutshell, the models not only predict far too much warming, but they potentially get the nature of the change wrong. The models portray a relatively smooth upward trend over the whole span, while the data exhibit a single jump in the late 1970s, with no statistically significant trend on either side.
Our paper is called “HAC-Robust Trend Comparisons Among Climate Series With Possible Level Shifts.” It was published in Environmetrics, and is available with Open Access thanks to financial support from CIGI/INET. Data and code are here and in the paper’s SI.
Tropical Troposphere Revisited
The issue of models-vs-observations in the troposphere over the tropics has been much-discussed, including here at CA. Briefly to recap:
The Long Balloon Record
After publishing MMH2010 I decided to extend the analysis back to the start of the weather balloon record in 1958. I knew that I’d have to deal with the Pacific Climate Shift in the late 1970s. This is a well-documented phenomenon (see ample references in the paper) in which a major reorganization of ocean currents induced a step-change in a lot of temperature series around the Pacific rim, including the tropospheric weather balloon record. Fitting a linear trend through a series with a positive step-change in the middle will bias the slope coefficient upwards. When I asked Tim if the VF (Vogelsang-Franses) method could be used in an application allowing for a suspected mean shift, he said no, it would require derivation of a new asymptotic distribution and critical values, taking into account the possibility of known or unknown break points. He agreed to take on the theoretical work and we began collaborating on the paper.
Much of the paper is taken up with deriving the methodology and establishing its validity. For readers who skip that part and wonder why it is even necessary, the answer is that in serious empirical disciplines, that’s what you are expected to do to establish the validity of novel statistical tools before applying them and drawing inferences.
Our paper provides a trend estimator and test statistic based on standard errors that are valid in the presence of serial correlation of any form up to but not including unit roots, that do not require the user to choose tuning parameters such as bandwidths and lag lengths, and that are robust to the possible presence of a shift term at a known or unknown break point. In the paper we present various sets of results based on three possible specifications: (i) there is no shift term in the data, (ii) there is a shift at a known date (we picked December 1977) and (iii) there is a possible shift term but we do not know when it occurs.
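To make the bias concrete, here is a minimal sketch of my own (plain OLS with a step dummy, not the paper's HAC-robust VF inference): fitting a linear trend through a pure step-change series produces a spurious positive slope, while adding a shift term at the break date removes it.

```python
import numpy as np

def trend_with_shift(y, shift_idx):
    """OLS fit of y_t = a + b*t + c*1(t >= shift_idx).

    Plain least squares only -- none of the paper's HAC-robust
    VF inference -- but it shows the point estimates involved.
    """
    n = len(y)
    t = np.arange(n)
    step = (t >= shift_idx).astype(float)
    X = np.column_stack([np.ones(n), t, step])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # (intercept, trend, shift)

# Synthetic monthly series: a pure 0.3-degree step at t=100, zero trend, noise.
rng = np.random.default_rng(0)
n = 240
y = 0.3 * (np.arange(n) >= 100) + 0.1 * rng.standard_normal(n)

slope_ignoring_step = np.polyfit(np.arange(n), y, 1)[0]   # biased upward
_, slope_with_step, step_size = trend_with_shift(y, 100)  # trend near zero
```

The slope fitted without the dummy is clearly positive even though the underlying process has no trend at all; the dummy specification attributes the change to the step instead.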
When we began working on the paper a few years ago, the then-current data was from the CMIP3 model library, which is what we use in the paper. The AR5 used the CMIP5 library so I’ll generate results for those runs later, but for now I’ll discuss the CMIP3 results.
We used 23 CMIP3 models and 3 observational series. This is Figure 4 from our paper (click for larger version):
Each panel shows the trend terms (°C/decade) and HAC-robust confidence intervals for CMIP3 models 1–23 (red) and the 3 weather balloon series (blue). The left column shows the case where we don’t control for a step-change. The right column shows the case where we do, dating it at December 1977. The top row is MT, the bottom row is LT.
You can see that the model trends remain about the same with or without the level shift term, though the confidence intervals widen when we allow for a level shift. When we don’t allow for a level shift (left column), all 6 balloon series exhibit small but significant trends. When we allow for a level shift (right column), placing it at 1977:12, all observed trends become very small and statistically insignificant. All but two models (GFDL 2.0 (#7) and GFDL2.1 (#8)) yield positive and significant trends either way.
Mainstream versus Reality
Figure 3 from our paper (below) shows the model-generated temperature data, mean GCM trend (red line) and the fitted average balloon trend (blue dashed line) over the sample period. In all series (including all the climate models) we allow a level shift at 1977:12. Top panel: MT; bottom panel: LT.
The dark red line shows the trend in the model ensemble mean. Since this displays the central tendency of climate models we can take it to be the central tendency of mainstream thinking about climate dynamics, and, in particular, how the climate responds to rising GHG forcing. The dashed blue line is the fitted trend through observations; i.e. reality. For my part, given the size and duration of the discrepancy, and the fact that the LT and MT trends are indistinguishable from zero, I do not see how the “mainstream” thinking can be correct regarding the processes governing the overall atmospheric response to rising CO2 levels. As the Thorne et al. review noted, a lack of tropospheric warming “would have fundamental and far-reaching implications for understanding of the climate system.”
Figures don’t really do justice to the clarity of our results: you need to see the numbers. Table 7 summarizes the main test scores on which our conclusions are drawn.
The first column indicates the data series being tested. The second column lists the null hypothesis. The third column gives the VF score, but note that this statistic follows a non-standard distribution and critical values must either be simulated or bootstrapped (as discussed in the paper). The last column gives the p-value.
The first block reports results with no level shift term included in the estimated models. The first 6 rows show the 3 LT trends (with the trend coefficient in °C/decade in brackets) followed by the 3 MT trends. The test of a zero trend strongly rejects in each case (here the 5% critical value is 41.53 and the 1% is 83.96). The next two rows report tests of average model trend = average observed trend. These too reject, even ignoring the shift term.
The second block repeats these results with a level shift at 1977:12. Here you can see the dramatic effect of controlling for the Pacific Climate Shift. The VF scores for the zero-trend test collapse and the p-values soar; in other words the trends disappear and become practically and statistically insignificant. The model/obs trend equivalence tests strongly reject again.
The next two lines show that the shift terms are not significant in this case. This is partly because shift terms are harder to identify than trends in time series data.
The final section of the paper reports the results when we use a data-mining algorithm to identify the shift date, adjusting the critical values to take into account the search process. Again the trend equivalence tests between models and observations reject strongly, and this time the shift terms become significant or weakly significant.
We also report results model-by-model in the paper. Some GCMs do not individually reject, some always do, and for some it depends on the specification. Adding a level shift term increases the VF test scores but also increases the critical values so it doesn’t always lead to smaller p-values.
Why test the ensemble average and its distribution?
The IPCC (p. 772) says the observations should be tested against the span of the entire ensemble of model runs rather than the average. In one sense we do this: model-by-model results are listed in the paper. But we also dispute this approach, since the ensemble range can be made arbitrarily wide simply by adding more runs with alternative parameterizations. Proposing a test that requires data to fall outside a range that you can make as wide as you like effectively makes your theory unfalsifiable. Also, the IPCC (and everyone else) talks about climate models as a group or as a methodological genre, and the genre gains no support from the observation that a single outlying GCM overlaps with the observations while all the others are far away. Climate models, like any models (including economic ones), are ultimately large, elaborate numerical hypotheses: if the world works in such a way, and if the input variables change in such-and-such a way, then the following output variables will change thus and so. To defend “models” collectively, i.e. as a related set of physical hypotheses about how the world works, requires testing a measure of their central tendency, which we take to be the ensemble mean.
James Annan dismissed the MMH2010 results in the same way, saying that it was meaningless to compare the model average to the data. His argument was that some models also reject when compared to the average model, and it makes no sense to say that models are inconsistent with models, therefore the whole test is wrong. But this is a non sequitur. Even if one or more individual models are such outliers that they reject against the model average, this does not negate the finding that the average model rejects against the observed data. If the central tendency of the models is significantly far from reality, then the central tendency of the models is wrong, period. That the only model which reliably does not reject against the data (in this case GFDL 2.1) is an outlier among GCMs only adds to the evidence that the models are systematically biased.
There’s a more subtle problem in Annan’s rhetoric, when he says “Is anyone seriously going to argue on the basis of this that the models don’t predict their own behaviour?” In saying this he glosses over the distinction between a single outlier model and “the models” as a group, namely as a methodological genre. To refer to “the models” as an entity is to invoke the assumption of a shared set of hypotheses about how the climate works. Modelers often point out that GCMs are based on known physics. Presumably the laws of physics are the same for everybody, including all modelers. Some climatic processes are not resolvable from first principles and have to be represented as empirical approximations and parameterizations, hence there are differences among specific models and specific model runs. But the ensemble average (and its variance) remains the best way to characterize the models’ shared, central tendency: to the extent a model is an outlier from the average, it is less and less representative of models in general. Indeed, it is difficult to conceive of any alternative.
Over the 55 years from 1958 to 2012, climate models not only significantly over-predict observed warming in the tropical troposphere, but they represent it in a fundamentally different way than is observed. Models represent the interval as a smooth upward trend with no step-change. The observations, however, assign all the warming to a single step-change in the late 1970s coinciding with a known event (the Pacific Climate Shift), and identify no significant trend before or after. In my opinion the simplest and most likely interpretation of these results is that climate models, on average, fail to replicate whatever process yielded the step-change in the late 1970s and they significantly overstate the overall atmospheric response to rising CO2 levels.
A group of hackers are using a vulnerability in the Nest thermostat to secure it against Nest's remote data collection.
We last met Shaun Lovejoy when he claimed that mankind caused global temperatures to increase. At the 99.9% level, of course.
He’s now saying that the increase which wasn’t observed wasn’t there because of natural variability. But, he assures us, we’re still at fault.
His entire effort is beside the point. If the “pause” wasn’t predicted, then the models are bad and the theories that drive them probably false. It matters not whether such pauses are “natural” or not.
Tell me honestly. Is this sentence in Lovejoy’s newest peer-reviewed foray (“Return periods of global climate fluctuations and the pause,” Geophysical Research Letters) science or politics? “Climate change deniers have been able to dismiss all the model results and attribute the warming to natural causes.”
The reason scientists like Yours Truly have dismissed the veracity of climate models is for the eminently scientific reason that models which cannot make skillful forecasts are bad. And this is so even if you don’t want them to be. Even if you love them. Even if the models are consonant with a cherished and desirable ideology.
Up to a constant, Lovejoy’s curious model says the global temperature is the climate sensitivity (at doubled CO2) times the log of the ratio of the time-varying CO2 concentration to its baseline value, all plus the “natural” global temperature.
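In symbols (my notation, with illustrative numbers, not Lovejoy's fitted values), that model is essentially a one-liner:

```python
import math

def lovejoy_style_temperature(co2_ppm, t_nat, sensitivity=3.0, co2_ref=277.0):
    """Sketch of the model as described: anthropogenic warming equals the
    climate sensitivity (per CO2 doubling) times log2 of the CO2 ratio,
    plus the "natural" component Tnat(t). The sensitivity of 3.0 and the
    preindustrial reference of 277 ppm are illustrative values of mine,
    not Lovejoy's fitted numbers.
    """
    return sensitivity * math.log2(co2_ppm / co2_ref) + t_nat

# At exactly doubled CO2 with Tnat = 0, the model returns the sensitivity.
warming_at_doubling = lovejoy_style_temperature(2 * 277.0, t_nat=0.0)
```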
There is no such thing. I mean, there is no such thing as a natural temperature in the absence of mankind. This is because mankind, like every other plant and animal species ever, has been influencing the climate since its inception. Only a denier would deny this.
Follow me closely. Lovejoy believes he can separate out the effects of humans on temperature and thus estimate what the temperature would be were man not around. Forget that such a quantity is of no interest (to any human being), or that such a task is hugely complex. Such estimates are possible. But so are estimates of temperature assuming the plot from the underrated pre-Winning Charlie Sheen movie The Arrival is true.
Let Lovejoy say what he will of Tnat(t) (as he calls it). Since this is meant to be science, how do we verify that Lovejoy isn’t talking out of his chapeau? How do we verify his conjectures? For that is all they are, conjectures. I mean, I could create my own estimate of Tnat(t), and so could you—and so could anybody. Statistics is a generous, if not a Christian, field. The rule of statistical modeling is, Ask and ye shall receive. How do we tell which estimate is correct?
Answer: we cannot.
But—there’s always a but in science—we might believe Lovejoy was on to something if, and only if, his odd model were able to predict new data, data he had never before seen. Has he done this?
Answer: he has not.
His Figure shown above (global temp) might be taken as a forecast, though. His model predicts a juicy increase. Upwards and onwards! Anybody want to bet that this is the course the future temperature will actually take? If it doesn’t, Lovejoy is wrong. And no denying it.
After fitting his “GCM-free methodology” model, Lovejoy calculates the chances of seeing certain features in Tnat(t), all of which are conditional on his model and the correctness of Tnat(t). Meaning, if his model is fantasia, so are the probabilities about Tnat(t).
Lovejoy concludes his opus with the words, “We may still be battling the climate skeptic arguments that the models are untrustworthy and that the variability is mostly natural in origin.”
Listen: if the GCMs (not just Lovejoy’s curious entry) made bad forecasts, they are bad models. It matters not that they “missed” some “natural variability.” The point is they made bad forecasts. That means they misidentified whatever it was that caused the temperature to take the values it did. That may be “natural variability” or things done by mankind. But it must be something. It doesn’t even matter if Lovejoy’s model is right: the GCMs were wrong.
He says the observed “pause” “has a convincing statistical explanation.” It has Lovejoy’s explanation. But I, or you, could build your own model and show that the “pause” does not have a convincing statistical explanation.
Besides, who gives a fig-and-a-half for statistical explanations? We want causal explanations. We want to know why things happen. We already know that they happened.
In today’s post, I will look at a new Naturemag climate reconstruction claiming unprecedentedness (h/t Bishop Hill): “Evolution of the Southern Annular Mode during the past millennium” (Abram et al Nature 2014, pdf). Unfortunately, it is marred by precisely the same sort of data mining and spurious multivariate methodology that has been repeatedly identified in Team paleoclimate studies.
The flawed reconstruction has been breathlessly characterized at the Conversation by Guy Williams, an Australian climate academic, as a demonstration that, rather than indicating lower climate sensitivity, the recent increase in Antarctic sea ice is further evidence that things are worse than we thought. Worse it seems than previously imagined even by Australian climate academics.
the apparent paradox of Antarctic sea ice is telling us that it [climate change] is real and that we are contributing to it. The Antarctic canary is alive, but its feathers are increasingly wind-ruffled.
A Quick Review of Multivariate Errors
Let me start by assuming that CA readers understand the basics of multivariate data mining. In an extreme case, if you do a multiple regression of a sine wave against a large enough network of white noise, you can achieve arbitrarily high correlations. (See an early CA post on this here discussing an example from Phillips 1998.)
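The white-noise extreme is easy to verify numerically. A minimal illustrative sketch (the numbers are mine, not from Phillips 1998): regress a sine wave on a growing panel of pure white-noise predictors and watch R² climb toward 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                 # sample length
t = np.arange(n)
target = np.sin(2 * np.pi * t / n)      # the "signal" we pretend to explain

def r_squared(n_predictors):
    """R^2 from OLS of the sine wave on pure white-noise predictors."""
    X = rng.standard_normal((n, n_predictors))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return 1 - np.sum(resid ** 2) / np.sum(target ** 2)

# R^2 climbs toward 1 as the noise network grows: pure overfitting.
for p in (5, 25, 75, 99):
    print(p, round(r_squared(p), 3))
```

With 99 noise predictors and 100 observations, the fit is essentially perfect despite the predictors containing no information at all.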
At the other extreme, if you really do have a network of proxies with a common signal, the signal is readily extracted through averaging without any ex post screening or correlation weighting with the target.
As discussed on many occasions, there are many seemingly “sensible” multivariate methods that produce spurious results when applied to modern trends. In our original articles on Mann et al 1998-1999, Ross and I observed that short-centered principal components on networks of red noise is strongly biased to the production of hockey sticks. A related effect is that screening large networks based on correlation to modern trends is also biased to the production of hockey sticks. This has been (more or less independently) observed at numerous climate blogs, but is little known in academic climate literature. (Ross and I noted the phenomenon in our 2009 PNAS comment on Mann et al 2008, citing an article by David Stockwell in an Australian mining newsletter, though the effect had been previously noted at CA and other blogs).
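The screening bias is just as easy to reproduce. A toy sketch (parameters are illustrative, not drawn from any published study): generate pure AR(1) red noise, keep only the series that correlate with a rising "modern" target, and average the survivors. The average bends upward at the end even though no series contains any signal.

```python
import numpy as np

rng = np.random.default_rng(1)
n_series, n_years, rho = 1000, 150, 0.9   # rho: AR(1) persistence ("redness")

# A network of pure red-noise pseudo-proxies: no climate signal anywhere.
innovations = rng.standard_normal((n_series, n_years))
proxies = np.zeros_like(innovations)
for t in range(1, n_years):
    proxies[:, t] = rho * proxies[:, t - 1] + innovations[:, t]

# Ex post screening: keep only series correlated with a rising modern target.
modern_trend = np.arange(30, dtype=float)  # the last 30 "years" trend upward
corrs = np.array([np.corrcoef(p[-30:], modern_trend)[0, 1] for p in proxies])
survivors = proxies[corrs > 0.5]

recon = survivors.mean(axis=0)
# The screened average is flat early and bends sharply upward at the end,
# even though every input series is noise.
print(len(survivors), round(recon[:-30].mean(), 2), round(recon[-1], 2))
```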
Weighting proxies by correlation to target temperature is the sort of thing that “makes sense” to climate academics, but is actually even worse than ex post correlation screening. It is equivalent to Partial Least Squares regression of the target against a network (e.g. here for a discussion). Any regression against a large number of predictors is vulnerable to overfitting, a phenomenon well understood with Ordinary Least Squares regression, but also applicable to Partial Least Squares regression. Hegerl et al 2007 (cited by Abram et al as an authority) explicitly weighted proxies by correlation to target temperature. See the CA post here for a comparison of methods.
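The equivalence to Partial Least Squares is one line of algebra: with standardized proxies, the vector of correlations is X′y/n, which is exactly the first PLS weight vector up to a scale factor. A numerical check on toy data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 39, 25                       # 39 calibration years, 25 proxies
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

Xs = (X - X.mean(0)) / X.std(0)     # standardized proxies
ys = (y - y.mean()) / y.std()       # standardized target

r = (Xs * ys[:, None]).mean(0)      # correlation of each proxy with target
composite = Xs @ r                  # correlation-weighted composite

w = Xs.T @ ys                       # first PLS weight vector is X'y
pls1 = Xs @ w                       # first PLS component

# The two series are identical up to a positive scale factor (r = w / n).
print(np.corrcoef(composite, pls1)[0, 1])
```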
If one unpacks the linear algebra of Mann et al 1998-1999, an enterprise thus far neglected in academic literature, one readily sees that its regression phase in the AD1400 and AD1000 steps boils down to weighting proxies by correlation to the target (see here) – this is different from the bias in the principal components step that has attracted more publicity.
At Climate Audit, I’ve consistently argued that relatively simple averaging can recover the “signal” from networks with a common signal (which, by definition “proxies” ought to have). I’ve argued in favor of working from large population networks of like proxies without ex post screening or ex post correlation weighting.
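The contrast with screened or correlation-weighted methods is easy to demonstrate: when proxies really do share a common signal, a plain unweighted average recovers it. A toy sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n_proxies, n_years = 50, 1000
signal = np.cumsum(rng.standard_normal(n_years)) * 0.1   # common "climate" signal

# Each proxy = signal + independent noise; no screening, no weighting.
proxies = signal + rng.standard_normal((n_proxies, n_years))

stack = proxies.mean(axis=0)        # simple unweighted average

# Independent noise shrinks like 1/sqrt(n_proxies); the signal survives.
print(round(np.corrcoef(stack, signal)[0, 1], 3))
```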
The Proxy Network of Abram et al 2014
Abram et al used a network of 25 proxies, some very short (5 begin only in the mid-19th century) with only 6 reaching back to AD1000, the start of their reconstruction. They calibrated this network to the target SAM index over a calibration period of 1957-1995 (39 years).
The network consists of 14 South American tree ring chronologies, one South American lake pigment series, one ice core isotope series from the Antarctic Peninsula and nine ice core isotope series from the Antarctic continent. The Antarctic and South American networks are both derived from the previous PAGES2K networks, using the subset of South American proxies located south of 30S. (This eliminates the Quelccaya proxies, both of which were used upside down in the PAGES2K South American reconstruction.)
Abram et al described their proxy selection as follows:
We also use temperature-sensitive proxy records for the Antarctic and South America continental regions [5 - PAGES2k] to capture the full mid-latitude to polar expression of the SAM across the Drake Passage transect. The annually resolved proxy data sets compiled as part of the PAGES2k database are published and publically available [5]. For the South American data set we restrict our use to records south of 30 S and we do not use the four shortest records that are derived from instrumental sources. Details of the individual records used here and their correlation with the SAM are given in Supplementary Table 1.
However, their network of 14 South American tree ring chronologies is actually the product of heavy prior screening of an ex ante network of 104 (!!) chronologies. (One of the ongoing methodological problems in this field is the failure of authors to properly account for prior screening and selection).
The PAGES2K South American network was contributed by Neukom, the co-lead author of Gergis et al 2012. Neukom’s multivariate work is an almost impenetrable maze of ex post screening and ex post correlation weighting. If Mannian statistics is Baroque, Neukom’s is Rococo. CA readers will recall that non-availability of data deselected by screening was an issue in Gergis et al. (CA readers will recall that David Karoly implausibly claimed that Neukom and Gergis “independently” discovered the screening error in Gergis et al 2012 on the same day that Jean S reported it at Climate Audit.) Although Neukom’s proxy network has become increasingly popular in multiproxy studies, I haven’t been able to parse his tree ring chronologies as Neukom has failed to archive much of the underlying data and refused to provide it when requested.
Neukom’s selection/screening of these 14 chronologies was done in Neukom et al 2011 (Clim Dyn) using a highly non-standard algorithm which rated thousands of combinations according to verification statistics. While not a regression method per se, it is an ex post method and, if eventually parsed, will be subject to similar considerations as regression methods – the balloon is still being squeezed.
The Multivariate Methodology of Abram et al 2014
Abram et al used a methodology equivalent to the regression methodology of the AD1400 and AD1000 steps of Mann et al 1998-1999 – a methodology later used (unaware) in Hegerl et al 2007, who are cited by Abram et al.
In this methodology, proxies are weighted by their correlation coefficient with the resulting composite scaled to the target. Abram et al 2014 described their multivariate method as follows (BTW “CPS” normally refers to unweighted composites):
We employ the widely used composite plus scale (CPS) methodology [5- PAGES2K,11 - Jones et al 2009, 12 - Hegerl et al 2007] with nesting to account for the varying length of proxies making up the reconstruction. For each nest the contributing proxies were normalized relative to the AD 1957-1995 calibration interval…
The normalized proxy records were then combined with a weighting [12- Hegerl et al 2007] based on their correlation coefficient (r) with the SAM during the calibration interval (Supplementary Table 1). The combined record was then scaled to match the mean and standard deviation of the instrumental SAM index during the calibration interval. Finally, nests were spliced together to provide the full 1,008-year SAM reconstruction.
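Leaving aside nesting and splicing, the quoted recipe reduces to a few lines. A toy sketch (random numbers standing in for the proxies and the SAM index; this is my paraphrase of the described steps, not the authors' code):

```python
import numpy as np

def cps_reconstruction(proxies, target, cal):
    """Correlation-weighted composite-plus-scale, per the quoted description.

    proxies : (n_proxies, n_years) array
    target  : (n_cal,) instrumental index over the calibration interval
    cal     : slice selecting the calibration years within n_years
    """
    # 1. Normalize each proxy relative to the calibration interval.
    mu = proxies[:, cal].mean(axis=1, keepdims=True)
    sd = proxies[:, cal].std(axis=1, keepdims=True)
    z = (proxies - mu) / sd

    # 2. Weight each proxy by its correlation r with the target.
    r = np.array([np.corrcoef(zi[cal], target)[0, 1] for zi in z])
    composite = (r[:, None] * z).sum(axis=0)

    # 3. Rescale the composite to the target's calibration mean and sd.
    c = composite[cal]
    return (composite - c.mean()) / c.std() * target.std() + target.mean()

# Toy inputs: 25 pseudo-proxies, 1008 "years", a 39-year calibration window.
rng = np.random.default_rng(4)
proxies = rng.standard_normal((25, 1008))
target = rng.standard_normal(39)
recon = cps_reconstruction(proxies, target, slice(-39, None))
print(recon.shape)
```

By construction the reconstruction reproduces the target's mean and standard deviation over the calibration window, whatever the proxies contain.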
Although Abram et al (and their reviewers) were apparently unaware, this methodology is formally equivalent to MBH99 regression methodology and to Partial Least Squares regression. Right away, one can see potential calibration period overfitting perils when one is using a network of 25 proxies to fit over a calibration period of only 39 years. Such overfitting is particularly bad when proxies are flipped over (see another old CA post here – I am unaware of anything equivalent in academic climate literature).
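The overfitting peril is concrete: feed the same correlation-weighting recipe a network of 25 pure-noise "proxies" and a short calibration window, and it manufactures impressive calibration skill that evaporates out of sample. A sketch with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(5)
p, n_cal, n_ver = 25, 39, 200

# Pure-noise "proxies" with no relation whatsoever to the target.
proxies = rng.standard_normal((p, n_cal + n_ver))
target_cal = rng.standard_normal(n_cal)
target_ver = rng.standard_normal(n_ver)

# Weight by calibration-period correlation, as in the CPS recipe.
r = np.array([np.corrcoef(pr[:n_cal], target_cal)[0, 1] for pr in proxies])
composite = (r[:, None] * proxies).sum(axis=0)

cal_r = np.corrcoef(composite[:n_cal], target_cal)[0, 1]
ver_r = np.corrcoef(composite[n_cal:], target_ver)[0, 1]
print("calibration r:", round(cal_r, 2))    # impressively high
print("verification r:", round(ver_r, 2))   # near zero
```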
The Abram/PAGES2K South American Tree Ring Network
The Abram/PAGES2K South American tree ring network is an almost classic example of what not to do. Below is an excerpt from their Supplementary Table 1 listing their South American proxies, together with their correlation (r) to the target SAM index and the supposed “probability” of the correlation:
Right away you should be able to see the absurdity of this table. The average correlation of chronologies in the tree ring network to the target SAM index is a Mannian -0.01, with correlations ranging from -0.289 to +0.184.
There’s an irony to the average correlation being so low. Villalba et al 2012, in Nature Geoscience, also considered a large network of Patagonian tree ring chronologies (many of which were identical to Neukom et al 2011 sites), showing a very noticeable decline in ring widths over the 20th century (with declining precipitation) and a significant negative correlation to the Southern Annular Mode (specifically discussed in the article). It appears to me that Neukom’s prior screening of South American tree ring chronologies according to temperature (reducing the network from 104 to 14) made the network much less suitable for reconstruction of the Southern Annular Mode (which is almost certainly more clearly reflected in precipitation proxies).
The distribution of correlation coefficients in Abram et al is inconsistent with the network being a network of proxies for SAM. Instead of an average correlation of ~0, a network of actual proxies should have a significant positive (or negative) correlation, and, in a “good” network of proxies of the same type (e.g. Patagonian tree ring chronologies), all correlations will have the same sign.
Nonetheless, Abram et al claim that chronologies with the most extreme correlation coefficients within the network (both positive and negative) are also the most “significant” (as measured by their p-value). They obtained this perverse result as follows: the “significance” of their correlations “were assessed relative to 10000 simulations on synthetic noise series with the same power spectrum as the real data [31 - Ebisuzaki, J. Clim 1997]”. Thus both upward-trending and downward-trending series were assessed as more “significant” within the population of tree ring chronologies and given higher weighting in the reconstruction.
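For reference, an Ebisuzaki-style null works by randomizing Fourier phases while preserving each series' power spectrum, then recomputing the correlation many times. A rough sketch of the idea (simplified; Ebisuzaki 1997 has additional details, and the treatment of the Nyquist term here is approximate):

```python
import numpy as np

def phase_randomize(x, rng):
    """Surrogate series with (approximately) the same power spectrum as x
    but randomized Fourier phases -- the Ebisuzaki-style null."""
    f = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0, 2 * np.pi, len(f))
    phases[0] = 0.0                            # keep the zero-frequency term real
    return np.fft.irfft(np.abs(f) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(6)
x = np.cumsum(rng.standard_normal(256))        # a persistent (autocorrelated) series
target = rng.standard_normal(256)

# Null distribution of |r| against spectrum-preserving surrogates.
null = [abs(np.corrcoef(phase_randomize(x, rng), target)[0, 1])
        for _ in range(1000)]
observed = abs(np.corrcoef(x, target)[0, 1])
p_value = float(np.mean([v >= observed for v in null]))
print(round(p_value, 3))
```

Note that this test asks whether one given series' correlation beats its own spectral null; it says nothing about ranking extreme positive and negative correlations within a network as jointly "significant".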
The statistical reference cited by Abram et al was designed for a different problem, and their calculations of significance are therefore done incorrectly. Neither their network of tree ring chronologies nor their multivariate method is suitable for the task, as the distribution of correlation coefficients plainly shows.
A reconstruction using the methods of Abram et al 2014, especially accumulating the previous screening of Neukom et al 2011, is completely worthless for estimating prior Southern Annular Mode. This is different from being “WRONG!”, the adjective that is too quickly invoked in some skeptic commentary.
Despite my criticism, I think that proxies along the longitudinal transect of South America are extremely important and that the BAS Antarctic Peninsula ice core isotope series from James Ross Island is of great importance (and that it meets virtually all CA criteria for an ex ante “good” proxy.)
However, Abram et al is about as far from a satisfactory analysis of such proxies as one can imagine. It is too bad that Naturemag appears unequal to identifying even elementary methodological errors in articles that claim unprecedentedness. Perhaps they should reflect on their choice of peer reviewers for paleoclimate articles.
Witnesses say a pregnant woman in labor was prevented by authorities from crossing a Los Angeles street to a hospital Wednesday because the road had been closed for President Barack Obama’s impending motorcade.
The unidentified woman was barred from walking the few hundred feet to the hospital for at least 30 minutes as authorities waited for the president’s motorcade to pass by, witness Carrie Clifford told TheBlaze early Thursday morning.
“I felt bad for her,” Clifford said. “It does happen when Obama comes to L.A. or I’m sure anywhere else. It paralyzes the city, it does make it complicated.”
“You can’t do the things you had set out to do because the president is in town,” she added.
KNBC-TV reporter Robert Kovacik posted footage on video-sharing website Instagram depicting the incident.
“Woman in labor on bench as motorcade passes,” he wrote, “not allowed to cross street to get to #CedarsSinai.”
A spokesperson for the LAPD declined to comment on the incident early Thursday morning and referred TheBlaze to the Secret Service. A spokesperson for the Secret Service did not immediately return TheBlaze’s phone call.
Video captured by Kovacik, however, shows an unidentified sergeant explaining the circumstances.
“As soon as we can — it looks like the motorcade is coming through right about now, so we’ll be able to open it up for traffic. The first thing we’ll try to get through will be an ambulance, but I can’t guarantee there will be —,” the officer said before the video suddenly ended.
A picture snapped by Clifford showed medics attending to the woman as they waited for the road to reopen.
— Carrie Clifford (@CarrieClifford) July 23, 2014
Clifford told TheBlaze she left the scene before the incident was resolved. According to KNBC, at last check the baby had not yet arrived, and KTLA-TV reported the same thing Thursday morning.
This post has been updated.
Follow Oliver Darcy (@oliverdarcy) on Twitter
One of the great dangers of historical analysis is applying our modern standards and ex post facto knowledge to analysis of historical decisions. For example, I see modern students all the time assume that the Protestant Reformation was about secularization, because that is how we think about religious reform and the tide of trends that were to follow a century or two later. But tell John Calvin's Geneva it was about secularization and they would have looked at you like you were nuts (if they didn't burn you). Ditto we bring our horror for nuclear arms developed in the Cold War and apply it to decision-makers in WWII dropping the bomb on Hiroshima. I don't think there is anything harder in historical analysis than shedding our knowledge and attitudes and putting ourselves in the relevant time.
Believe it or not, it does not take 300 or even 50 years for these problems to manifest themselves. They can occur in just four. Take the recent Halbig case, one of a series of split decisions on the PPACA and whether IRS rules to allow government subsidies of health care policies in Federal exchanges are consistent with that law.
The case, Halbig v. Burwell, involved the availability of subsidies on federally operated insurance marketplaces. The language of the Affordable Care Act plainly says that subsidies are only available on exchanges established by states. The plaintiff argued this meant that, well, subsidies could only be available on exchanges established by states. Since he lives in a state with a federally operated exchange, his exchange was illegally handing out subsidies.
The government argued that this was ridiculous; when you consider the law in its totality, it said, the federal government obviously never meant to exclude federally operated exchanges from the subsidy pool, because that would gut the whole law. The appeals court disagreed with the government, 2-1. Somewhere in the neighborhood of 5 million people may lose their subsidies as a result.
This result isn’t entirely shocking. As Jonathan Adler, one of the architects of the legal strategy behind Halbig, noted today on a conference call, the government was unable to come up with any contemporaneous congressional statements that supported its view of congressional intent, and the statutory language is pretty clear. Members of Congress have subsequently stated that this wasn’t their intent, but my understanding is that courts are specifically barred from considering post-facto statements about intent.
We look at what we know NOW, which is that Federal health care exchanges operate in 37 states, and that the Federal exchange serves more customers than all the other state exchanges combined. So, with this knowledge, we declare that Congress could not possibly have meant to deny subsidies to more than half the system.
But this is an ex-post-facto, fallacious argument. The key is "what did Congress expect in 2010 when the law was passed", and it was pretty clear that Congress expected all the states to form exchanges. In fact, the provision of subsidies only in state exchanges was the carrot Congress built in to encourage states to form exchanges. (Since Congress could not actually mandate states form exchanges, it had to use such financial carrots and sticks. Congress does this all the time, all the way back to seat belt and 55MPH speed limit mandates that were forced on states at the threat of losing state highway funds. The Medicaid program has worked this way with states for years -- and the Obamacare Medicaid changes follow exactly this template of Feds asking states to do something and providing incentives for them to do so in the form of Federal subsidies.) Don't think of the issue as "not providing subsidies in federal exchanges." That is not how Congress would have stated it at the time. Think of it as "subsidies are not provided if the state does not build an exchange". This was not a bug, it was a feature. Drafters intended this as an incentive for creating exchanges. That they never imagined so many would not create exchanges does not change this fact.
It was not really until 2012 that anyone even took seriously the idea that states might not set up exchanges. Even as late as December 2012, the list was only 17 states, not 37. And note from the linked article the dissenting states' logic -- they were refusing to form an exchange because it was thought that the Feds could not set one up in time. Why? Because the Congress and the Feds had not planned on the Federal exchanges serving very many people. It had never been the expectation or intent.
If, in 2010, on the day after Obamacare had passed, one had run around and said "subsidies don't apply in states that do not form exchanges" the likely reaction would not have been "WHAT?!" but "Duh." No one at the time would have thought that would "gut the whole law."
Postscript: By the way, note how dangerous both the arguments are that opponents of Halbig are using
Overstock CEO Patrick Byrne shared what he believes is the succinct history of the United States during an interview with Glenn Beck that aired Tuesday.
“The real story of this country can be told very succinctly,” Byrne remarked. “For about 150 years, it worked, the constitutional principles worked. And then in the 1930s, as all this power got shifted to Washington, we all figured out that you can go and, instead of competing, you can go and lobby and get checks written on the account of other people.”
Overstock CEO Patrick Byrne speaks on the Glenn Beck Program July 17, 2014. The second half of the interview aired on July 22. (Photo: TheBlaze TV)
Byrne said the United States did that for “about 50 years” when, in the 1980s, “everybody had gotten organized into groups,” both to lobby for other people’s money and to prevent their own interests from being harmed.
“Then we all found one group of people that we can write checks on their account, and they can’t stop us,” Byrne remarked. “That’s the group of future human beings that can never organize to stop us.”
“For about 30 years, we’ve been writing checks on the bank account of the future,” Byrne concluded. “Whether you’re talking about the environment or Social Security or Medicare, we’ve just written checks on the bank account of the future. And now the future has shown up, and life sucks.”
You can watch the complete interview below, which also includes a discussion of America’s “monoculture”:
"Remy: What are the Chances? (An IRS Love Song)" is the latest from Reason TV. Watch above or click the link below for full lyrics, links, downloadable versions, and more.
Two years ago I participated in an NEH summer seminar for political philosophers. This was during the campaign for the 2012 Presidential election. One evening over drinks, I asked the others (15 or so philosophers from around the country) whether they had ever contributed any money to a political campaign. It turned out that everyone at the table but me had contributed to the Obama campaign that year.
As anyone who has spent some time in academia knows, this is hardly atypical. Many academics (philosophers and non-philosophers) spend considerable amounts of time and money on political activism. They vote (duh), put signs in their yard, attend party rallies, and so on. Heck, at my school “community-engaged scholarship” is now among the conditions of tenure.
Around the same time, I was reading Daniel Kahneman’s book Thinking Fast and Slow and Jonathan Haidt’s The Righteous Mind. Both books discuss the ways in which partisanship can bias our thinking. And so I started worrying about this. Because, as anyone who has spent some time in academia also knows, academics (philosophers included) are hardly the most ideologically diverse group. The ideological spectrum ranges roughly from left to extreme left. For a field that is supposed to think openly, critically, and honestly about the nature and purpose of politics, this is not a healthy state of affairs. The risk of people confirming one another’s preconceptions, or worse, trying to one-up each other, is simply too great.
(By the way, it’s likely that the risk is at least somewhat of a reality. I know of many libertarians who think that the level of argument and rigor that reviewers demand of their arguments is not quite the same as what is demanded of arguments for egalitarian conclusions. That is anecdotal evidence. For other fields, there is more robust empirical evidence. Psychologists Yoel Inbar and Joris Lammers have found that in their field ideological bias is very much a real thing.)
I mention this episode because it had a significant effect on how I think about the responsibilities of being a philosopher. I now think it is morally wrong for philosophers, and other academics who engage in politically relevant work, to be politically active (yes, you read that correctly).
The argument for this conclusion is, I think, startlingly simple. I develop it in detail in a now forthcoming paper, “In Defense of the Ivory Tower: Why Philosophers Should Stay out of Politics.” Here is a quick summary of the argument:
I have given this paper at a number of universities, and I have found that a lot of people are very resistant to the conclusion (to say the least). But each of the argument’s premises is true, I think, and so the conclusion must be true as well.
Lots of people resist premise (3). But that is really not up for debate. It is an empirical question whether political activism harms our ability to seek the truth about politics. And the empirical evidence is just overwhelming: it does. (You can find a bunch of cites in the paper, in addition to Haidt and Kahneman.)
Over at The Philosophers’ Cocoon, Marcus Arvan offers a different objection. He says he disagrees with premise (2), but his real objection is actually a bit different. Marcus suggests that there can be permissible trade-offs between activism and scholarship, such that a teensy little tiny bit of activism is surely okay, even if it harms our scholarship. It is too simple, Marcus suggests, to say that we should forgo activism if it makes us worse at philosophy.
I don’t find this a powerful objection. Here is the reply I give in the paper, and it still seems plausible to me. The reason people want to be activist is that they want to make the world a better place. That’s cool – I want that too. But there are many, many ways to achieve this. And activism is but one of these. (It is also, I should add, a really inefficient way.) My point, then, is simple: if philosophers (and other academics) want to make the world a better place, they should do it in ways that do not make them bad at their jobs. That means they should do it without political activism.
So the argument stands, I think. But Marcus ends with a good question. What the hell am I doing on a blog with the word libertarians in its name? If political affiliations harm our ability to seek the truth, and seek the truth we must, then am I not being irresponsible as well? And he is right, there is a real risk in this. By self-labeling as a libertarian, I risk becoming biased in favor of certain arguments, premises, and conclusions, and against others. And that, to be sure, is something I want to avoid.
The honest answer is that I thought hard about it when I was asked to join the blog. (My wife asked the same question as Marcus did when I told her I was thinking of joining.) I decided that there was little additional risk to joining. For one, I have always seen myself as a reluctant libertarian. I grew up a Rawlsian and slowly moved away from those views toward more libertarian views. But I never became an “in the fold” kind of guy. So I apply the label only partially to myself. On the other hand, I am pretty deeply convinced of a number of things that will inevitably put me in a libertarian (or libertarian-like) camp. And this is something I know. So insofar as I do apply the label “libertarian” to myself, joining the blog didn’t add much to it.
Or so I told myself. But that is, of course, exactly the sort of thing that a biased person will tell himself. I am aware of that. What won the day, finally, was that the blog has no “party-line.” We have people here who defend basic income, parental licensing, Israel, Palestine, and lord knows what other view will come up next. We are a weird bunch. And I like the blog because of this. I think it helps show people just how diverse and intellectually rich the libertarian part of the conversation is (or can be). It helps me stay on my toes. And I wanted to contribute to that. So here I am.
Perhaps that was a mistake. I am open to persuasion. I made pretty radical changes to my life after becoming convinced of my thesis of non-activism. I no longer follow the political news, I have tried to distance myself from any sympathies I might have had for parties, movements or politicians (that one was easy), and so on. I highly recommend it. But maybe I didn’t go quite far enough. If someone can convince me, I’ll leave. Take your best shot.
Here's a new way to identify individual computers over the Internet. The page instructs the browser to draw an image. Because each computer draws the image slightly differently, this can be used to uniquely identify each computer. This is a big deal, because there's no way to block this right now.
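The browser-side drawing can't be reproduced outside a browser, but the tracking step itself is simple: reduce the rendered pixel bytes — which differ subtly across machines because of fonts, antialiasing and GPU paths — to a stable hash. A sketch of just that hashing step (the `pixel_bytes` input is an assumption here, standing in for the bytes a script would read back from the canvas):

```python
import hashlib

def canvas_fingerprint(pixel_bytes: bytes) -> str:
    """Collapse rendered pixel data into a compact, stable identifier.

    Two machines rasterizing the same text with different fonts,
    antialiasing settings, or GPU paths produce slightly different
    bytes -- and therefore different, but per-machine stable, IDs.
    """
    return hashlib.sha256(pixel_bytes).hexdigest()[:16]

# The same rendering always yields the same ID; any rendering
# difference, however tiny, yields a different one.
a = canvas_fingerprint(b"\x10\x20\x30" * 1000)
b = canvas_fingerprint(b"\x10\x20\x31" * 1000)
print(a == b)   # False
```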
EDITED TO ADD (7/22): This technique was first described in 2012. And it seems that NoScript blocks this. Privacy Badger probably blocks it, too.
EDITED TO ADD (7/23): EFF has a good post on who is using this tracking system -- the White House is -- and how to defend against it.
And a good story on BoingBoing.
IRS Deputy Associate Chief Counsel Thomas Kane said in transcribed congressional testimony that more IRS officials experienced computer crashes, bringing the total number of crash victims to "less than 20," and also said that the agency does not know if the lost emails are still backed up somewhere.
The new round of computer crash victims includes David Fish, who routinely corresponded with Lois Lerner, as well as Lerner subordinate Andy Megosh, Lerner's technical adviser Justin Lowe, and Cincinnati-based agent Kimberly Kitchens.
March 3, 1817: As his last official act as President, Madison vetoes a bill that would provide federal funding for building roads and canals throughout the United States. The President finds no expressed congressional power to fund roads and canals in the Constitution, and he believes that the federal government should not encroach upon matters delegated to state governments.
To the House of Representatives of the United States:
Having considered the bill this day presented to me entitled “An act to set apart and pledge certain funds for internal improvements,” and which sets apart and pledges funds “for constructing roads and canals, and improving the navigation of water courses, in order to facilitate, promote, and give security to internal commerce among the several States, and to render more easy and less expensive the means and provisions for the common defense,” I am constrained by the insuperable difficulty I feel in reconciling the bill with the Constitution of the United States to return it with that objection to the House of Representatives, in which it originated.
The legislative powers vested in Congress are specified and enumerated in the eighth section of the first article of the Constitution, and it does not appear that the power proposed to be exercised by the bill is among the enumerated powers, or that it falls by any just interpretation within the power to make laws necessary and proper for carrying into execution those or other powers vested by the Constitution in the Government of the United States.
“The power to regulate commerce among the several States” can not include a power to construct roads and canals, and to improve the navigation of water courses in order to facilitate, promote, and secure such a commerce without a latitude of construction departing from the ordinary import of the terms strengthened by the known inconveniences which doubtless led to the grant of this remedial power to Congress.
To refer the power in question to the clause “to provide for the common defense and general welfare” would be contrary to the established and consistent rules of interpretation, as rendering the special and careful enumeration of powers which follow the clause nugatory and improper. Such a view of the Constitution would have the effect of giving to Congress a general power of legislation instead of the defined and limited one hitherto understood to belong to them, the terms “common defense and general welfare” embracing every object and act within the purview of a legislative trust. It would have the effect of subjecting both the Constitution and laws of the several States in all cases not specifically exempted to be superseded by laws of Congress, it being expressly declared “that the Constitution of the United States and laws made in pursuance thereof shall be the supreme law of the land, and the judges of every State shall be bound thereby, anything in the constitution or laws of any State to the contrary notwithstanding.” Such a view of the Constitution, finally, would have the effect of excluding the judicial authority of the United States from its participation in guarding the boundary between the legislative powers of the General and the State Governments, inasmuch as questions relating to the general welfare, being questions of policy and expediency, are unsusceptible of judicial cognizance and decision.
A restriction of the power “to provide for the common defense and general welfare” to cases which are to be provided for by the expenditure of money would still leave within the legislative power of Congress all the great and most important measures of Government, money being the ordinary and necessary means of carrying them into execution.
If a general power to construct roads and canals, and to improve the navigation of water courses, with the train of powers incident thereto, be not possessed by Congress, the assent of the States in the mode provided in the bill can not confer the power. The only cases in which the consent and cession of particular States can extend the power of Congress are those specified and provided for in the Constitution.
I am not unaware of the great importance of roads and canals and the improved navigation of water courses, and that a power in the National Legislature to provide for them might be exercised with signal advantage to the general prosperity. But seeing that such a power is not expressly given by the Constitution, and believing that it can not be deduced from any part of it without an inadmissible latitude of construction and a reliance on insufficient precedents; believing also that the permanent success of the Constitution depends on a definite partition of powers between the General and the State Governments, and that no adequate landmarks would be left by the constructive extension of the powers of Congress as proposed in the bill, I have no option but to withhold my signature from it, cherishing the hope that its beneficial objects may be attained by a resort for the necessary powers to the same wisdom and virtue in the nation which established the Constitution in its actual form and providently marked out in the instrument itself a safe and practicable mode of improving it as experience might suggest.
Ever since innovative ride-sharing services like Uber and Lyft started gaining popularity, people have made the intuitive assertion that these services could cut down on drinking and driving. People will choose an affordable, safe alternative to drunk driving if that alternative is readily available.
Just a few weeks ago, Pittsburgh resident Nate Good published a quick study that offered the first hard evidence that DUI rates may be decreasing in cities where Uber is popular. An analysis of Philadelphia's data showed an 11.1 percent decrease in the rate of DUIs since ridesharing services were made available, and an even more astonishing 18.5 percent decrease for people under 30.
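Good's headline number is just a percent change in the DUI rate before and after ridesharing became available. A minimal sketch of that calculation (the counts below are hypothetical placeholders for illustration, not his actual Philadelphia figures):

```python
def pct_change(before, after):
    """Percent change from a baseline rate to a post-launch rate."""
    return (after - before) / before * 100

# Hypothetical average monthly DUI counts, chosen only to illustrate
# the shape of the calculation behind an ~11 percent decline.
before_rate = 45.0  # hypothetical pre-ridesharing monthly average
after_rate = 40.0   # hypothetical post-ridesharing monthly average

print(round(pct_change(before_rate, after_rate), 1))  # → -11.1
```

A negative result indicates a decline; the real study compared court-record DUI rates over the periods before and after UberX's launch.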
As everyone knows, however, correlation does not equal causation. Good's quick number-crunching was too simplistic to draw any overarching conclusions, but it did open the door for future studies. A recent, deeper analysis from Uber makes an even stronger case that ridesharing services may be responsible for a decline in DUIs.
The first thing Uber did was use its own data to see if people disproportionately called for Uber cars from bars in comparison to other venues. And indeed:
Requests for rides come from Uber users at bars at a much higher rate than you might expect based on the number of bars there are in the city. The fraction of requests from users at bars is between three and five times greater than the total share of bars.
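The comparison Uber describes boils down to an over-representation ratio: bars' share of ride requests divided by bars' share of venues. A sketch with made-up numbers (every input below is hypothetical):

```python
def overrepresentation(requests_from_bars, total_requests, num_bars, total_venues):
    """Ratio of bars' share of ride requests to bars' share of venues.

    A ratio of 1.0 means requests come from bars at exactly the rate the
    number of bars would predict; Uber reported ratios of roughly 3-5x.
    """
    request_share = requests_from_bars / total_requests
    venue_share = num_bars / total_venues
    return request_share / venue_share

# Hypothetical city: bars are 5% of venues but generate 20% of requests.
print(round(overrepresentation(200, 1000, 50, 1000), 1))  # → 4.0
```

A ratio of 4.0 falls in the middle of the three-to-five-times range Uber reported.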
Next, they used government data to find out when deaths from DUIs are most likely to occur. Fatalities due to drunk driving start to peak at midnight, are the highest from 12:00-3:00 AM, and happen much more often on the weekends. Uber then gathered their own internal data and found that Uber transactions spiked at the times when people are most likely to drink and drive (as depicted in a chart accompanying Uber's analysis).
There remains plenty of room for more studies on how Uber is affecting transportation trends. But early evidence for a positive impact—an impact that goes far beyond mere consumer convenience—is already compelling.
Chicago Tribune reporters David Kidwell and Alex Richards have put together a massive investigation documenting huge problems that caused the city’s red light cameras to send out thousands of tickets to innocent drivers. Today they report that after a bunch of cameras stopped giving out any tickets for a couple of days (suggesting possible downtime and perhaps some sort of fiddling), they suddenly went berserk, giving out dozens of tickets a day:
Cameras that for years generated just a few tickets daily suddenly caught dozens of drivers a day. One camera near the United Center rocketed from generating one ticket per day to 56 per day for a two-week period last summer before mysteriously dropping back to normal.
Tickets for so-called rolling right turns on red shot up during some of the most dramatic spikes, suggesting an unannounced change in enforcement. One North Side camera generated only a dozen tickets for rolling rights out of 100 total tickets in the entire second half of 2011. Then, over a 12-day spike, it spewed 563 tickets — 560 of them for rolling rights.
Many of the spikes were marked by periods immediately before or after when no tickets were issued — downtimes suggesting human intervention that should have been documented. City officials said they cannot explain the absence of such records.
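The pattern the Tribune describes, a camera jumping from roughly one ticket a day to dozens, is the kind of anomaly a simple baseline comparison can flag. A minimal sketch (the median baseline and the 10x threshold are assumptions for illustration, not the Tribune's actual methodology):

```python
from statistics import median

def flag_spikes(daily_counts, multiplier=10):
    """Return indices of days whose ticket count exceeds `multiplier`
    times the camera's typical (median) daily output.

    The median baseline and 10x multiplier are hypothetical choices;
    the Tribune's analysis may have used different criteria.
    """
    baseline = median(daily_counts)
    return [i for i, n in enumerate(daily_counts) if n > multiplier * baseline]

# A camera that issued about one ticket a day, then spiked to dozens:
counts = [1, 2, 1, 0, 1, 56, 54, 60, 1, 1]
print(flag_spikes(counts))  # → [5, 6, 7]
```

Using the median rather than the mean keeps the spike days themselves from inflating the baseline they are measured against.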
City officials seem unable to explain anything at all, even as traffic courts, bucking typical behavior, have reversed nearly half the tickets appealed from one such spike. Transportation officials claim they didn’t even know the spikes were happening until the reporters told them.
Oh, also of note: The company (which is supposed to inform the city of any such spikes) and the city’s program are under federal investigation for corruption. The company, Redflex Traffic Systems Inc., is accused of bribing a city official to the tune of millions in order to land the contract. The Chicago Tribune reported last year how the controversy caused Mayor Rahm Emanuel to disqualify Redflex from a new contract putting up speed cameras near schools and parks to increase safety (read: revenue).
The Tribune notes that these traffic cameras have generated nearly $500 million in revenue since the program began in 2003, yet everybody in the lengthy story seems to dance around the idea that the city or Redflex could have any sort of incentive to make alterations to cause the system to suddenly start spitting out tickets. Chicago’s CBS affiliate noted last fall that the city’s 2014 budget relied on revenue from its red light cameras (and the highest cigarette taxes in the nation) in order to balance.