Shared posts

25 Jul 17:53

Glenn Beck Debunks Ice Cream Sandwich Story With In-House Science Experiment

by Erica Ritz

Glenn Beck and radio co-hosts Pat Gray and Stu Burguiere conducted a little in-house “science experiment” on Friday after reading about an ice cream sandwich that seemingly didn’t melt after being left outside for hours.

“So [WCPO-TV] did an experiment with Haagen-Dazs, which melted quickly into a puddle because that’s, you know, real stuff,” Gray explained. “The Klondike sandwich melted to a fair extent, they said. The Walmart sandwich, though it melted a bit, remained the most solid in appearance and still looked like a sandwich. And I’m looking at the photo of it. They don’t say at what hour this was taken, but it’s pretty well intact.”

Image source: WCPO-TV


Burguiere didn’t seem particularly disturbed that there may be chemicals and preservatives in his food, saying sarcastically: “I mean, I know there are piles of bodies outside of every single hospital in America because of the deadly Walmart ice cream sandwiches…”

“Come on, you know that the crap we put in our food is killing us,” Beck responded.

“I don’t believe that at all,” Burguiere said. “Not at all. The word ‘natural’ does not mean you’re going to be healthy. The word ‘chemicals’ does not mean you’re going to be sick.”

More from the discussion below:

Complimentary Clip from TheBlaze TV

The group decided to re-create the experiment using three different types of ice cream sandwiches: a Blue Bunny ice cream sandwich and two they picked up from Walmart, one of which was 97 percent fat-free.

After just one hour in 77-degree weather, all three ice cream sandwiches had melted.

Glenn Beck re-creates WCPO-TV's ice cream sandwich experiment July 25, 2014. (Photo: TheBlaze TV)


“They all melted,” Beck said, adding: “The Walmart sandwich that is not fat-free looks like it’s made out of all fake stuff, which it’s not. The one made out of fake stuff looks yummy…”

More from the “experiment” below:

Complimentary Clip from TheBlaze TV

The full episode of The Glenn Beck Program, along with many other live-streaming shows and thousands of hours of on-demand content, is available on just about any digital device. Click here to watch every Glenn Beck episode from the past 30 days for just $1!


24 Jul 14:00

New Paper by McKitrick and Vogelsang comparing models and observations in the tropical troposphere

by Ross McKitrick

This is a guest post by Ross McKitrick. Tim Vogelsang and I have a new paper comparing climate models and observations over a 55-year span (1958-2012) in the tropical troposphere. Among other things we show that climate models are inconsistent with the HadAT, RICH and RAOBCORE weather balloon series. In a nutshell, the models not only predict far too much warming, but they potentially get the nature of the change wrong. The models portray a relatively smooth upward trend over the whole span, while the data exhibit a single jump in the late 1970s, with no statistically significant trend either side.

Our paper is called “HAC-Robust Trend Comparisons Among Climate Series With Possible Level Shifts.” It was published in Environmetrics, and is available with Open Access thanks to financial support from CIGI/INET. Data and code are here and in the paper’s SI.

Tropical Troposphere Revisited

The issue of models-vs-observations in the troposphere over the tropics has been much-discussed, including here at CA. Briefly to recap:

  • All climate models (GCMs) predict that in response to rising CO2 levels, warming will occur rapidly and with amplified strength in the troposphere over the tropics. See AR4 Figure 9.1  and accompanying discussion; also see AR4 text accompanying Figure 10.7.
  • Getting the tropical troposphere right in a model matters because that is where most solar energy enters the climate system, where there is a high concentration of water vapour, and where the strongest feedbacks operate. In simplified models, in response to uniform warming with constant relative humidity, about 55% of the total warming amplification occurs in the tropical troposphere, compared to 10% in the surface layer and 35% in the troposphere outside the tropics. And within the tropics, about two-thirds of the extra warming is in the upper layer and one-third in the lower layer. (Soden & Held  p. 464).
  • Neither weather satellites nor radiosondes (weather balloons) have detected much, if any, warming in the tropical troposphere, especially compared to what GCMs predict. The 2006 US Climate Change Science Program report (Karl et al 2006) noted this as a “potentially serious inconsistency” (p. 11). I suggest it is now time to drop the word “potentially.”
  • The missing hotspot has attracted a lot of discussion, both at blogs and among experts. There are two related “hotspot” issues: amplification and sensitivity. The first refers to whether the ratio of tropospheric to surface warming is greater than 1, and the second refers to whether there is a strong tropospheric warming rate. Our analysis focused on the sensitivity issue, not the amplification one. In order to test amplification there has to have been a lot of warming aloft, which turns out not to have been the case. Sensitivity can be tested directly, which is what we do, and in any case it is the more relevant question for measuring the rate of global warming.
  • In 2007 Douglass et al. published a paper in the IJOC showing that models overstated warming trends at every layer of the tropical troposphere. Santer et al. (2008) replied that if you control for autocorrelation in the data the trend differences are not statistically significant. This finding was very influential. It was relied upon by the EPA when replying to critics of their climate damage projections in the Technical Support Document behind the “endangerment finding”, which was the basis for their ongoing promulgation of new GHG regulations. It was also the basis for the Thorne et al. (2011) survey’s conclusion that “there is no reasonable evidence of a fundamental disagreement between models and observations” in the tropical troposphere.
  • But for some reason Santer et al truncated their data at 1999, just at the end of a strong El Nino. Steve and I sent a comment to IJOC pointing out that if they had applied their method on the full length of then-available data they’d get a very different result, namely a significant overprediction by models. The IJOC would not publish our comment.
  • I later redid the analysis using the full length of available data, applying a conventional panel regression method and a newer, more robust trend comparison methodology, namely the non-parametric HAC (heteroskedasticity and autocorrelation)-robust estimator developed by econometricians Tim Vogelsang and Philip Hans Franses (VF2005). I showed that over the 1979-2009 interval climate models on average predict 2-4x too much warming in the tropical lower- and mid-troposphere (LT, MT) layers, and the discrepancies were statistically significant. This paper was published as MMH2010 in Atmospheric Science Letters.
  • In the AR5, the IPCC is reasonably forthright on the topic (pp. 772-73). They acknowledge the findings in MMH2010 (and other papers that have since confirmed the point) and conclude that models overstated tropospheric warming over the satellite interval (post-1979). However they claim that most of the bias is due to model overestimation of sea surface warming in the tropics. It’s not clear from the text where they get this from. Since the bias varies considerably among models, it seems to me likely to be something to do with faulty parameterization of feedbacks. Also the problem persists even in studies that constrain models to observed SST levels.
  • Notwithstanding the failure of models to get the tropical troposphere right, when discussing fidelity to temperature trends the SPM of the AR5 declares Very High Confidence in climate models (p. 15). But they also declare low confidence in their handling of clouds (p. 16), which is very difficult to square with their claim of very high confidence in models overall. They seem to be largely untroubled by trend discrepancies over 10-15 year spans (p. 15). We’ll see what they say about 55-year discrepancies.


The Long Balloon Record

After publishing MMH2010 I decided to extend the analysis back to the start of the weather balloon record in 1958. I knew that I’d have to deal with the Pacific Climate Shift in the late 1970s. This is a well-documented phenomenon (see ample references in the paper) in which a major reorganization of ocean currents induced a step-change in a lot of temperature series around the Pacific rim, including the tropospheric weather balloon record. Fitting a linear trend through a series with a positive step-change in the middle will bias the slope coefficient upwards. When I asked Tim if the VF method could be used in an application allowing for a suspected mean shift, he said no, it would require derivation of a new asymptotic distribution and critical values, taking into account the possibility of known or unknown break points. He agreed to take on the theoretical work and we began collaborating on the paper.
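The slope bias is easy to see in a toy simulation (illustrative numbers of my own choosing, not the paper's data): fit a plain linear trend to a series whose only feature is a mid-sample step, then refit with a step dummy included.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy series: 55 "years" of monthly data with NO trend, only a +0.3
# step at the midpoint plus noise.
n = 660
t = np.arange(n)
step = (t >= n // 2).astype(float)
y = 0.3 * step + rng.normal(0.0, 0.1, size=n)

# Trend-only fit: the step masquerades as a positive slope.
slope_naive = np.polyfit(t, y, 1)[0]

# Trend fit with a step dummy included: the spurious slope vanishes.
X = np.column_stack([np.ones(n), t, step])
slope_with_dummy = np.linalg.lstsq(X, y, rcond=None)[0][1]
```

With these settings the naive slope comes out clearly positive even though the generating process has no trend at all, while the dummy-augmented fit returns a slope indistinguishable from zero.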

Much of the paper is taken up with deriving the methodology and establishing its validity. For readers who skip that part and wonder why it is even necessary, the answer is that in serious empirical disciplines, that’s what you are expected to do to establish the validity of novel statistical tools before applying them and drawing inferences.

Our paper provides a trend estimator and test statistic based on standard errors that are valid in the presence of serial correlation of any form up to but not including unit roots, that do not require the user to choose tuning parameters such as bandwidths and lag lengths, and that are robust to the possible presence of a shift term at a known or unknown break point. In the paper we present various sets of results based on three possible specifications: (i) there is no shift term in the data, (ii) there is a shift at a known date (we picked December 1977) and (iii) there is a possible shift term but we do not know when it occurs.
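Specification (iii) can be sketched in miniature. This is a hedged toy version of the point estimation step only: the break date is chosen by minimizing the residual sum of squares over trimmed candidate dates. The paper's actual contribution, valid HAC-robust inference with critical values adjusted for the search, is not reproduced here.

```python
import numpy as np

def fit_trend_shift(y, k):
    """OLS of y on [intercept, trend, step-at-k dummy]; returns (trend, shift, SSE)."""
    n = len(y)
    t = np.arange(n)
    X = np.column_stack([np.ones(n), t, (t >= k).astype(float)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = float(np.sum((y - X @ beta) ** 2))
    return beta[1], beta[2], sse

def estimate_break(y, trim=0.15):
    """Unknown-break case: the SSE-minimizing candidate date over the
    trimmed interior of the sample."""
    n = len(y)
    candidates = range(int(trim * n), int((1 - trim) * n))
    return min(candidates, key=lambda k: fit_trend_shift(y, k)[2])

# Toy data in the spirit of the observations: a step, no trend.
rng = np.random.default_rng(1)
n = 660
t = np.arange(n)
y = 0.3 * (t >= 240) + rng.normal(0.0, 0.1, size=n)

k_hat = estimate_break(y)
trend, shift, _ = fit_trend_shift(y, k_hat)
```

The search recovers a break date near the true one, attributes the rise to the shift term, and leaves essentially no residual trend, which is the qualitative pattern the paper reports for the radiosonde series.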



  • All climate models but one characterize the 1958-2012 interval as having a significant upward trend in temperatures. Allowing for a late-1970s step change has basically no effect in model-generated series. Half the climate models yield a small positive step and half a small negative step, but all except two still report a large, positive and significant trend around it. Indeed in half the cases the trend becomes even larger once we allow for the step change. In the GCM ensemble mean there is no step-change in the late 1970s, just a large, uninterrupted and significant upward trend.
  • Over the same interval, when we do not control for a step change in the observations, we find significant upward trends in tropical LT and MT temperatures, though the average observed trend is significantly smaller than the average modeled trend.
  • When we allow for a late-1970s step change in each radiosonde series, all three assign most of the post-1958 increase in both the LT and MT to the step change, and the trend slopes become essentially zero.
  • Climate models project much more warming over the 1958-2012 interval than was observed in either the LT or MT layer, and the inconsistency is statistically significant whether or not we allow for a step-change, but when we allow for a shift term the models are rejected at smaller significance levels.
  • When we treat the break point as unknown and allow a data-mining process to identify it, the shift term is marginally significant in the LT and significant in the MT, with the break point estimated in mid-1979.

When we began working on the paper a few years ago, the then-current data was from the CMIP3 model library, which is what we use in the paper. The AR5 used the CMIP5 library so I’ll generate results for those runs later, but for now I’ll discuss the CMIP3 results.

We used 23 CMIP3 models and 3 observational series. This is Figure 4 from our paper:


Each panel shows the trend terms (°C/decade) and HAC-robust confidence intervals for CMIP3 models 1-23 (red) and the 3 weather balloon series (blue). The left column shows the case where we don’t control for a step-change. The right column shows the case where we do, dating it at December 1977. The top row is MT, the bottom row is LT.

You can see that the model trends remain about the same with or without the level shift term, though the confidence intervals widen when we allow for a level shift. When we don’t allow for a level shift (left column), all 6 balloon series exhibit small but significant trends. When we allow for a level shift (right column), placing it at 1977:12, all observed trends become very small and statistically insignificant. All but two models (GFDL 2.0 (#7) and GFDL2.1 (#8)) yield positive and significant trends either way.

Mainstream versus Reality

Figure 3 from our paper (below) shows the model-generated temperature data, mean GCM trend (red line) and the fitted average balloon trend (blue dashed line) over the sample period. In all series (including all the climate models) we allow a level shift at 1977:12. Top panel: MT; bottom panel: LT.


The dark red line shows the trend in the model ensemble mean. Since this displays the central tendency of climate models we can take it to be the central tendency of mainstream thinking about climate dynamics, and, in particular, how the climate responds to rising GHG forcing. The dashed blue line is the fitted trend through observations; i.e. reality. For my part, given the size and duration of the discrepancy, and the fact that the LT and MT trends are indistinguishable from zero, I do not see how the “mainstream” thinking can be correct regarding the processes governing the overall atmospheric response to rising CO2 levels. As the Thorne et al. review noted, a lack of tropospheric warming “would have fundamental and far-reaching implications for understanding of the climate system.”

Figures don’t really do justice to the clarity of our results: you need to see the numbers. Table 7 summarizes the main test scores on which our conclusions are drawn.



The first column indicates the data series being tested. The second column lists the null hypothesis. The third column gives the VF score, but note that this statistic follows a non-standard distribution and critical values must either be simulated or bootstrapped (as discussed in the paper). The last column gives the p-value.

The first block reports results with no level shift term included in the estimated models. The first 6 rows show the 3 LT trends (with the trend coefficient in °C/decade in brackets) followed by the 3 MT trends. The test of a zero trend strongly rejects in each case (here the 5% critical value is 41.53 and the 1% is 83.96). The next two rows report tests of average model trend = average observed trend. These too reject, even ignoring the shift term.

The second block repeats these results with a level shift at 1977:12. Here you can see the dramatic effect of controlling for the Pacific Climate Shift. The VF scores for the zero-trend test collapse and the p-values soar; in other words the trends disappear and become practically and statistically insignificant. The model/obs trend equivalence tests strongly reject again.

The next two lines show that the shift terms are not significant in this case. This is partly because shift terms are harder to identify than trends in time series data.

The final section of the paper reports the results when we use a data-mining algorithm to identify the shift date, adjusting the critical values to take into account the search process. Again the trend equivalence tests between models and observations reject strongly, and this time the shift terms become significant or weakly significant.

We also report results model-by-model in the paper. Some GCMs do not individually reject, some always do, and for some it depends on the specification. Adding a level shift term increases the VF test scores but also increases the critical values so it doesn’t always lead to smaller p-values.


Why test the ensemble average and its distribution?

The IPCC (p. 772) says the observations should be tested against the span of the entire ensemble of model runs rather than the average. In one sense we do this: model-by-model results are listed in the paper. But we also dispute this approach, since the ensemble range can be made arbitrarily wide simply by adding more runs with alternative parameterizations. Proposing a test that requires data to fall outside a range you can make as wide as you like effectively makes your theory unfalsifiable. Also, the IPCC (and everyone else) talks about climate models as a group or as a methodological genre, and it lends no support to the genre that a single outlying GCM overlaps with the observations while all the others sit far away. Climate models, like any models (including economic ones), are ultimately large, elaborate numerical hypotheses: if the world works in such a way, and if the input variables change in such-and-such a way, then the following output variables will change thus and so. To defend “models” collectively, i.e. as a related set of physical hypotheses about how the world works, requires testing a measure of their central tendency, which we take to be the ensemble mean.

In the same vein, James Annan dismissed the MMH2010 results, saying that it was meaningless to compare the model average to the data. His argument was that some models also reject when compared to the average model, and it makes no sense to say that models are inconsistent with models, therefore the whole test is wrong. But this is a non sequitur. Even if one or more individual models are such outliers that they reject against the model average, this does not negate the finding that the average model rejects against the observed data. If the central tendency of the models lies significantly far from reality, the central tendency of the models is wrong, period. That the only model which reliably does not reject against the data (in this case GFDL 2.1) is an outlier among GCMs only adds to the evidence that the models are systematically biased.

There’s a more subtle problem in Annan’s rhetoric, when he says “Is anyone seriously going to argue on the basis of this that the models don’t predict their own behaviour?” In saying this he glosses over the distinction between a single outlier model and “the models” as a group, namely as a methodological genre. To refer to “the models” as an entity is to invoke the assumption of a shared set of hypotheses about how the climate works. Modelers often point out that GCMs are based on known physics, and presumably the laws of physics are the same for everybody, including all modelers. Some climatic processes are not resolvable from first principles and have to be represented as empirical approximations and parameterizations, hence there are differences among specific models and specific model runs. But to the extent a model is an outlier from the average, it is less and less representative of models in general. The ensemble average (and its variance) therefore seems as good a way as any to characterize the shared, central tendency of models, and it is difficult to conceive of an alternative.


Bottom Line

Over the 55 years from 1958 to 2012, climate models not only significantly over-predict observed warming in the tropical troposphere, but they represent it in a fundamentally different way than is observed. Models represent the interval as a smooth upward trend with no step-change. The observations, however, assign all the warming to a single step-change in the late 1970s coinciding with a known event (the Pacific Climate Shift), and identify no significant trend before or after. In my opinion the simplest and most likely interpretation of these results is that climate models, on average, fail to replicate whatever process yielded the step-change in the late 1970s and they significantly overstate the overall atmospheric response to rising CO2 levels.



22 Jul 15:06

Securing the Nest Thermostat

by schneier

A group of hackers are using a vulnerability in the Nest thermostat to secure it against Nest's remote data collection.

23 Jul 11:52

Lovejoy’s New Attempt To Show We Are Doomed Does Not Convince

by Matt Briggs
Lovejoy's new model.


We last met Shaun Lovejoy when he claimed that mankind caused global temperatures to increase. At the 99.9% level, of course.

He’s now saying that the increase which wasn’t observed wasn’t there because of natural variability. But, he assures us, we’re still at fault.

His entire effort is beside the point. If the “pause” wasn’t predicted, then the models are bad and the theories that drive them probably false. It matters not whether such pauses are “natural” or not.

Tell me honestly. Is this sentence in Lovejoy’s newest peer-reviewed foray (“Return periods of global climate fluctuations and the pause”, Geophysical Research Letters) science or politics? “Climate change deniers have been able to dismiss all the model results and attribute the warming to natural causes.”

The reason scientists like Yours Truly have dismissed the veracity of climate models is for the eminently scientific reason that models which cannot make skillful forecasts are bad. And this is so even if you don’t want them to be. Even if you love them. Even if the models are consonant with a cherished and desirable ideology.

Up to a constant, Lovejoy’s curious model says the global temperature is given by the climate sensitivity (at doubled CO2) times the log of the ratio of the time-varying CO2 concentration to its preindustrial level, all plus the “natural” global temperature.
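In symbols, the verbal description amounts to something like the following (my reconstruction; the symbols for the doubling sensitivity and the preindustrial concentration are my notation, not necessarily Lovejoy’s):

```latex
T_{\mathrm{glob}}(t) \approx \lambda_{2\times\mathrm{CO_2}}
  \log_{2}\!\left(\frac{\rho_{\mathrm{CO_2}}(t)}{\rho_{\mathrm{CO_2,pre}}}\right)
  + T_{\mathrm{nat}}(t) + \mathrm{const}
```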

There is no such thing. I mean, there is no such thing as a natural temperature in the absence of mankind. This is because mankind, like every other plant and animal species ever, has been influencing the climate since its inception. Only a denier would deny this.

Follow me closely. Lovejoy believes he can separate out the effects of humans on temperature and thus estimate what the temperature would be were man not around. Forget that such a quantity is of no interest (to any human being), or that such a task is hugely complex. Such estimates are possible. But so are estimates of temperature assuming the plot from the underrated pre-Winning Charlie Sheen movie The Arrival is true.

Let Lovejoy say what he will of Tnat(t) (as he calls it). Since this is meant to be science, how do we verify that Lovejoy isn’t talking out of his chapeau? How do we verify his conjectures? For that is all they are, conjectures. I mean, I could create my own estimate of Tnat(t), and so could you, and so could anybody. Statistics is a generous, if not a Christian, field. The rule of statistical modeling is: Ask and ye shall receive. How do we tell which estimate is correct?

Answer: we cannot.

But—there’s always a but in science—we might believe Lovejoy was on to something if, and only if, his odd model were able to predict new data, data he had never before seen. Has he done this?

Answer: he has not.

His figure shown above (global temperature) might be taken as a forecast, though. His model predicts a juicy increase. Upwards and onwards! Anybody want to bet that this is the course the future temperature will actually take? If it doesn’t, Lovejoy is wrong. And no denying it.

After fitting his “GCM-free methodology” model, Lovejoy calculates the chances of seeing certain features in Tnat(t), all of which are conditional on his model and the correctness of Tnat(t). Meaning, if his model is fantasia, so are the probabilities about Tnat(t).

Oh, did I mention that Lovejoy first smoothed his time series? Yup: “a 1-2-1 running filter” (see here and here for more on why not to do this).
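For readers unfamiliar with the filter: a 1-2-1 running filter replaces each point with a (1/4, 1/2, 1/4) weighted average of itself and its neighbors. A quick sketch shows why smoothing before analysis is dangerous: it manufactures serial correlation (lag-1 autocorrelation near 2/3) out of data that had none.

```python
import numpy as np

def filter_121(x):
    """One pass of a 1-2-1 running filter (weights 1/4, 1/2, 1/4);
    endpoints are left unsmoothed in this sketch."""
    y = np.asarray(x, dtype=float).copy()
    y[1:-1] = 0.25 * x[:-2] + 0.5 * x[1:-1] + 0.25 * x[2:]
    return y

def lag1(v):
    """Lag-1 sample autocorrelation."""
    return np.corrcoef(v[:-1], v[1:])[0, 1]

# White noise goes in; strongly autocorrelated data comes out.
rng = np.random.default_rng(2)
x = rng.normal(size=10_000)
s = filter_121(x)
r_raw, r_smooth = lag1(x), lag1(s)
```

Any statistic computed on the smoothed series that assumes independent errors will be badly miscalibrated, which is the point of the posts linked above.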

Lovejoy concludes his opus with the words, “We may still be battling the climate skeptic arguments that the models are untrustworthy and that the variability is mostly natural in origin.”

Listen: if the GCMs (not just Lovejoy’s curious entry) made bad forecasts, they are bad models. It matters not that they “missed” some “natural variability.” The point is they made bad forecasts. That means they misidentified whatever it was that caused the temperature to take the values it did. That may be “natural variability” or things done by mankind. But it must be something. It doesn’t even matter if Lovejoy’s model is right: the GCMs were wrong.

He says the observed “pause” “has a convincing statistical explanation.” It has Lovejoy’s explanation. But I, or you, could build a different model and show that the “pause” does not have a convincing statistical explanation.

Besides, who gives a fig-and-a-half for statistical explanations? We want causal explanations. We want to know why things happen. We already know that they happened.

15 Jun 15:15

Abram et al 2014 and the Southern Annular Mode

by Steve McIntyre

In today’s post, I will look at a new Naturemag climate reconstruction claiming unprecedentedness (h/t Bishop Hill): “Evolution of the Southern Annular Mode during the past millennium” (Abram et al Nature 2014, pdf). Unfortunately, it is marred by precisely the same sort of data mining and spurious multivariate methodology that has been repeatedly identified in Team paleoclimate studies.

The flawed reconstruction has been breathlessly characterized at the Conversation by Guy Williams, an Australian climate academic, as a demonstration that, rather than indicating lower climate sensitivity, the recent increase in Antarctic sea ice is further evidence that things are worse than we thought. Worse it seems than previously imagined even by Australian climate academics.

the apparent paradox of Antarctic sea ice is telling us that it [climate change] is real and that we are contributing to it. The Antarctic canary is alive, but its feathers are increasingly wind-ruffled.

A Quick Review of Multivariate Errors
Let me start by assuming that CA readers understand the basics of multivariate data mining. In an extreme case, if you do a multiple regression of a sine wave against a large enough network of white noise series, you can achieve arbitrarily high correlations. (See an early CA post on this here, discussing an example from Phillips 1998.)
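That extreme case takes a few lines of linear algebra to reproduce (a toy sketch; the sizes of 120 observations and 5/30/100 regressors are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
target = np.sin(np.linspace(0, 4 * np.pi, n))  # deterministic "signal"

def r_squared(k):
    """R^2 from regressing the sine wave on k unrelated white-noise series."""
    X = rng.normal(size=(n, k))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return 1 - (resid @ resid) / (target @ target)

r2 = {k: r_squared(k) for k in (5, 30, 100)}
```

As the number of noise regressors approaches the number of observations, the fit to the pure sine wave becomes nearly perfect despite there being no relationship whatsoever, which is the overfitting hazard the post is describing.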

At the other extreme, if you really do have a network of proxies with a common signal, the signal is readily extracted through averaging without any ex post screening or correlation weighting with the target.

As discussed on many occasions, there are many seemingly “sensible” multivariate methods that produce spurious results when applied to modern trends. In our original articles on Mann et al 1998-1999, Ross and I observed that short-centered principal component analysis on networks of red noise is strongly biased toward the production of hockey sticks. A related effect is that screening large networks based on correlation to modern trends is also biased toward the production of hockey sticks. This has been (more or less independently) observed at numerous climate blogs, but is little known in academic climate literature. (Ross and I noted the phenomenon in our 2009 PNAS comment on Mann et al 2008, citing an article by David Stockwell in an Australian mining newsletter, though the effect had been previously noted at CA and other blogs.)
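The screening effect is easy to reproduce (a toy sketch with made-up parameters: AR(1) pseudo-proxies with persistence 0.9, a linear "instrumental" trend, and a correlation threshold of 0.5). Generate pure red noise, keep only the series that correlate with a modern trend, and average the survivors: the average acquires a blade over the screening window while the unscreened handle averages to nearly zero, a hockey stick from noise.

```python
import numpy as np

rng = np.random.default_rng(4)
n_years, n_proxies, cal = 1000, 200, 50
phi = 0.9  # AR(1) persistence of the pseudo-proxies

# Pure red noise: by construction there is no climate signal anywhere.
proxies = np.zeros((n_proxies, n_years))
eps = rng.normal(size=(n_proxies, n_years))
for t in range(1, n_years):
    proxies[:, t] = phi * proxies[:, t - 1] + eps[:, t]

# "Instrumental" target: an upward trend over the final 50 "years".
target = np.linspace(0.0, 1.0, cal)

# Ex post screening: keep only proxies correlating with the modern trend.
corr = np.array([np.corrcoef(p[-cal:], target)[0, 1] for p in proxies])
passers = proxies[corr > 0.5]

recon = passers.mean(axis=0)
blade_slope = np.polyfit(np.arange(cal), recon[-cal:], 1)[0]  # rising blade
handle_mean = recon[:-cal].mean()                             # flat handle
```

The surviving subset has a strongly positive trend over the calibration window by construction of the screen, while its pre-calibration average stays near zero because the series are independent noise there.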

Weighting proxies by correlation to target temperature is the sort of thing that “makes sense” to climate academics, but is actually even worse than ex post correlation screening. It is equivalent to Partial Least Squares regression of the target against a network (e.g. here for a discussion). Any regression against a large number of predictors is vulnerable to overfitting, a phenomenon well understood with Ordinary Least Squares regression, but also applicable to Partial Least Squares regression. Hegerl et al 2007 (cited by Abram et al as an authority) explicitly weighted proxies by correlation to target temperature. See the CA post here for a comparison of methods.

If one unpacks the linear algebra of Mann et al 1998-1999, an enterprise thus far neglected in academic literature, one readily sees that its regression phase in the AD1400 and AD1000 steps boils down to weighting proxies by correlation to the target (see here) – this is different from the bias in the principal components step that has attracted more publicity.

At Climate Audit, I’ve consistently argued that relatively simple averaging can recover the “signal” from networks with a common signal (which, by definition “proxies” ought to have). I’ve argued in favor of working from large population networks of like proxies without ex post screening or ex post correlation weighting.

The Proxy Network of Abram et al 2014
Abram et al used a network of 25 proxies, some very short (5 begin only in the mid-19th century), with only 6 reaching back to AD1000, the start of their reconstruction. They calibrated this network to the target SAM index over a 1957-1995 calibration period (39 years).

The network consists of 14 South American tree ring chronologies, 1 South American lake pigment series, one ice core isotope series from the Antarctic Peninsula and 9 ice core isotope series from the Antarctic continent. The Antarctic and South American networks are both derived from the previous PAGES2K networks, using the subset of South American proxies located south of 30S. (This eliminates the Quelccaya proxies, both of which were used upside down in the PAGES2K South American reconstruction.)

Abram et al described their proxy selection as follows:

We also use temperature-sensitive proxy records for the Antarctic and South America continental regions [5 - PAGES2k] to capture the full mid-latitude to polar expression of the SAM across the Drake Passage transect. The annually resolved proxy data sets compiled as part of the PAGES2k database are published and publically available5. For the South American data set we restrict our use to records south of 30 S and we do not use the four shortest records that are derived from instrumental sources. Details of the individual records used here and their correlation with the SAM are given in Supplementary Table 1.

However, their network of 14 South American tree ring chronologies is actually the product of heavy prior screening of an ex ante network of 104 (!!) chronologies. (One of the ongoing methodological problems in this field is the failure of authors to properly account for prior screening and selection).

The PAGES2K South American network was contributed by Neukom, the co-lead author of Gergis et al 2012. Neukom’s multivariate work is an almost impenetrable maze of ex post screening and ex post correlation weighting. If Mannian statistics is Baroque, Neukom’s is Rococo. CA readers will recall that non-availability of data deselected by screening was an issue in Gergis et al. (CA readers will recall that David Karoly implausibly claimed that Neukom and Gergis “independently” discovered the screening error in Gergis et al 2012 on the same day that Jean S reported it at Climate Audit.) Although Neukom’s proxy network has become increasingly popular in multiproxy studies, I haven’t been able to parse his tree ring chronologies as Neukom has failed to archive much of the underlying data and refused to provide it when requested.

Neukom’s selection/screening of these 14 chronologies was done in Neukom et al 2011 (Clim Dyn) using a highly non-standard algorithm that rated thousands of combinations according to verification statistics. While not a regression method per se, it is an ex post method and, once parsed, will be subject to the same considerations as regression methods – the balloon is still being squeezed.
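
The distorting effect of this sort of prior screening is easy to demonstrate with synthetic data. The sketch below is purely illustrative (it is not Neukom's actual algorithm): pure white noise stands in for both the 104 candidate chronologies and the target index, the network is screened down to 14 by calibration-period correlation, and apparent skill inside and outside the calibration window is compared.

```python
import random

random.seed(0)

def pearson(x, y):
    """Plain Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

years = 100                    # hypothetical record length
cal = slice(71, 100)           # a 29-"year" calibration window
target = [random.gauss(0, 1) for _ in range(years)]   # noise stand-in for an index

# 104 pure-noise "chronologies", screened down to the 14 best correlated
candidates = [[random.gauss(0, 1) for _ in range(years)] for _ in range(104)]
kept = sorted(candidates,
              key=lambda p: abs(pearson(p[cal], target[cal])),
              reverse=True)[:14]

cal_skill = sum(abs(pearson(p[cal], target[cal])) for p in kept) / 14
pre_skill = sum(abs(pearson(p[:71], target[:71])) for p in kept) / 14
print(f"mean |r|, calibration window: {cal_skill:.2f}")   # inflated by selection
print(f"mean |r|, pre-calibration:    {pre_skill:.2f}")   # back at noise level
```

Even though nothing in this network carries any signal at all, the screened subset looks impressively correlated over the calibration window; the apparent skill evaporates outside it. Any study inheriting a screened network without accounting for the screening inherits this artifact.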

The Multivariate Methodology of Abram et al 2014
Abram et al used a methodology equivalent to the regression methodology of the AD1400 and AD1000 steps of Mann et al 1998-1999 – a methodology later used (apparently unknowingly) in Hegerl et al 2007, which Abram et al cite.

In this methodology, proxies are weighted by their correlation coefficient with the resulting composite scaled to the target. Abram et al 2014 described their multivariate method as follows (BTW “CPS” normally refers to unweighted composites):

We employ the widely used composite plus scale (CPS) methodology [5- PAGES2K,11 - Jones et al 2009, 12 - Hegerl et al 2007] with nesting to account for the varying length of proxies making up the reconstruction. For each nest the contributing proxies were normalized relative to the AD 1957-1995 calibration interval…

The normalized proxy records were then combined with a weighting [12- Hegerl et al 2007] based on their correlation coefficient (r) with the SAM during the calibration interval (Supplementary Table 1). The combined record was then scaled to match the mean and standard deviation of the instrumental SAM index during the calibration interval. Finally, nests were spliced together to provide the full 1,008-year SAM reconstruction.
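
The procedure as described is mechanical enough to sketch in a few lines. The toy implementation below uses synthetic data throughout (25 noise proxies and a 29-year calibration window chosen only to mirror the paper's dimensions, not their actual series): normalize each proxy over the calibration interval, weight it by its calibration correlation with the target, and rescale the composite to the target's calibration mean and standard deviation.

```python
import random

random.seed(1)

def mean(x):
    return sum(x) / len(x)

def std(x):
    m = mean(x)
    return (sum((v - m) ** 2 for v in x) / len(x)) ** 0.5

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return num / (len(x) * std(x) * std(y))

n_years = 200
cal = slice(171, 200)                                  # 29-year calibration window
target = [random.gauss(0, 1) for _ in range(n_years)]  # stand-in "SAM index"
proxies = [[random.gauss(0, 1) for _ in range(n_years)] for _ in range(25)]

# 1. normalize each proxy relative to the calibration interval
def znorm(p):
    m, s = mean(p[cal]), std(p[cal])
    return [(v - m) / s for v in p]

zp = [znorm(p) for p in proxies]

# 2. weight each proxy by its calibration correlation with the target
w = [pearson(p[cal], target[cal]) for p in zp]
comp = [sum(wi * p[t] for wi, p in zip(w, zp)) for t in range(n_years)]

# 3. rescale the composite to the target's calibration mean and std
a = std(target[cal]) / std(comp[cal])
b = mean(target[cal]) - a * mean(comp[cal])
recon = [a * v + b for v in comp]

print(abs(mean(recon[cal]) - mean(target[cal])) < 1e-9)  # True: calibration mean matches
print(abs(std(recon[cal]) - std(target[cal])) < 1e-9)    # True: calibration std matches
```

Step 2 is exactly the correlation weighting that makes this "CPS" a one-step regression rather than an unweighted composite: flip the sign of a proxy and its weight flips with it.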

Although Abram et al (and their reviewers) were apparently unaware, this methodology is formally equivalent to MBH99 regression methodology and to Partial Least Squares regression. Right away, one can see potential calibration period overfitting perils when one is using a network of 25 proxies to fit over a calibration period of only 29 years. Such overfitting is particularly bad when proxies are flipped over (see another old CA post here – I am unaware of anything equivalent in academic climate literature).
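
The overfitting risk is easy to quantify with noise. In the simulation below (a toy exercise, not the paper's data), 25 pure-noise "proxies" are weighted by their correlation with a noise target over a 29-point calibration window; the composite fits the calibration period flatteringly well while having no skill anywhere else.

```python
import random

random.seed(3)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

n_years, n_cal = 200, 29
cal = slice(n_years - n_cal, n_years)   # 29-point calibration window at the end
ver = slice(0, n_years - n_cal)         # everything earlier, for verification
target = [random.gauss(0, 1) for _ in range(n_years)]
proxies = [[random.gauss(0, 1) for _ in range(n_years)] for _ in range(25)]

# correlation-weighted composite, as in the CPS variant described in the paper
w = [pearson(p[cal], target[cal]) for p in proxies]
comp = [sum(wi * p[t] for wi, p in zip(w, proxies)) for t in range(n_years)]

r_cal = pearson(comp[cal], target[cal])   # flattering in-sample fit
r_ver = pearson(comp[ver], target[ver])   # no out-of-sample skill
print(f"calibration r = {r_cal:.2f}, verification r = {r_ver:.2f}")
```

With 25 free weights fitted to 29 data points, a strong calibration correlation is nearly guaranteed whatever the proxies contain, which is why calibration-period fit is no evidence of reconstruction skill here.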

The Abram/PAGES2K South American Tree Ring Network

The Abram/PAGES2K South American tree ring network is an almost classic example of what not to do. Below is an excerpt from their Supplementary Table 1 listing their South American proxies, together with their correlation (r) to the target SAM index and the supposed “probability” of the correlation:

Right away you should be able to see the absurdity of this table. The average correlation of chronologies in the tree ring network to the target SAM index is a Mannian -0.01, with correlations ranging from -0.289 to +0.184.

There’s an irony to the average correlation being so low. Villalba et al 2012, also in Nature Geoscience, considered a large network of Patagonian tree ring chronologies (many of which were identical to Neukom et al 2011 sites), showing a very noticeable decline in ring widths over the 20th century (with declining precipitation) and a significant negative correlation to the Southern Annular Mode (specifically discussed in the article). It appears to me that Neukom’s prior screening of South American tree ring chronologies according to temperature (reducing the network from 104 to 14) made the network much less suitable for reconstruction of the Southern Annular Mode (which is almost certainly more clearly reflected in precipitation proxies).

The distribution of correlation coefficients in Abram et al is inconsistent with the network being a network of proxies for SAM. Instead of an average correlation of ~0, a network of actual proxies should have a significant positive (or negative) correlation, and, in a “good” network of proxies of the same type (e.g. Patagonian tree ring chronologies), all correlations will have the same sign.

Nonetheless, Abram et al claim that chronologies with the most extreme correlation coefficients within the network (both positive and negative) are also the most “significant” (as measured by their p-value). They obtained this perverse result as follows: the “significance” of their correlations “were assessed relative to 10000 simulations on synthetic noise series with the same power spectrum as the real data” [31 - Ebisuzaki, J. Clim 1997]. Thus both upward-trending and downward-trending series were assessed as more “significant” within the population of tree ring chronologies and given higher weighting in the reconstruction.
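
For reference, the Ebisuzaki test itself is straightforward: generate surrogate series with the same power spectrum as the proxy but randomized Fourier phases, and compare the observed correlation against the surrogate distribution. A minimal version is sketched below (plain-Python DFT, noise data, illustrative parameters). Note what the test actually answers: "is this correlation unlikely for a series with this autocorrelation structure?" – not "is this series a proxy for SAM?", and certainly not whether a two-sided grab of extreme correlations is legitimate.

```python
import cmath
import math
import random

random.seed(2)

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def phase_randomized(x):
    """Surrogate with x's power spectrum but random Fourier phases."""
    N = len(x)
    X = dft(x)
    Y = [0j] * N
    Y[0] = X[0]                              # keep the mean
    for k in range(1, (N + 1) // 2):
        phi = random.uniform(0, 2 * math.pi)
        Y[k] = abs(X[k]) * cmath.exp(1j * phi)
        Y[N - k] = Y[k].conjugate()          # conjugate symmetry -> real series
    if N % 2 == 0:                           # Nyquist bin must stay real
        Y[N // 2] = abs(X[N // 2]) * random.choice([-1.0, 1.0])
    return idft(Y)

n = 29                                       # calibration-window length
proxy = [random.gauss(0, 1) for _ in range(n)]
sam = [random.gauss(0, 1) for _ in range(n)]

r_obs = abs(pearson(proxy, sam))
null = [abs(pearson(phase_randomized(proxy), sam)) for _ in range(200)]
p_value = sum(r >= r_obs for r in null) / len(null)
print(f"|r| = {r_obs:.2f}, p = {p_value:.2f}")
```

Each surrogate preserves the proxy's mean and total power exactly (Parseval's theorem), which is the whole point of the construction: the null hypothesis is "a series that wiggles like this one but is unrelated to the target."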

The statistical reference cited by Abram et al was designed for a different problem, and their significance calculations are incorrect. Neither their network of tree ring chronologies nor their multivariate method is suitable for the task; the correlation coefficients themselves make the unsuitability clear.

A reconstruction using the methods of Abram et al 2014, especially compounding the prior screening of Neukom et al 2011, is completely worthless for estimating the past Southern Annular Mode. This is different from being “WRONG!”, the adjective that is too quickly invoked in some skeptic commentary.

Despite my criticism, I think that proxies along the longitudinal transect of South America are extremely important and that the BAS Antarctic Peninsula ice core isotope series from James Ross Island is of great importance (and that it meets virtually all CA criteria for an ex ante “good” proxy.)

However, Abram et al is about as far from a satisfactory analysis of such proxies as one can imagine. It is too bad that Naturemag appears unequal to identifying even elementary methodological errors in articles that claim unprecedentedness. Perhaps they should reflect on their choice of peer reviewers for paleoclimate articles.

24 Jul 06:21

Video: Woman in Labor ‘Not Allowed’ to Cross Street to Hospital Over Obama’s Impending Motorcade

by Oliver Darcy

Witnesses say a pregnant woman in labor was prevented by authorities from crossing a Los Angeles street to a hospital Wednesday because the road had been closed for President Barack Obama’s impending motorcade.

The unidentified woman was barred from walking the few hundred feet to the hospital for at least 30 minutes as authorities waited for the president’s motorcade to pass by, witness Carrie Clifford told TheBlaze early Thursday morning.

“I felt bad for her,” Clifford said. “It does happen when Obama comes to L.A. or I’m sure anywhere else. It paralyzes the city, it does make it complicated.”

“You can’t do the things you had set out to do because the president is in town,” she added.

KNBC-TV reporter Robert Kovacik posted footage on video-sharing website Instagram depicting the incident.

“Woman in labor on bench as motorcade passes,” he wrote, “not allowed to cross street to get to #CedarsSinai.”

A spokesperson for the LAPD declined to comment on the incident early Thursday morning and referred TheBlaze to the Secret Service. A spokesperson for the Secret Service did not immediately return TheBlaze’s phone call.

Video, however, captured by Kovacik shows an unidentified sergeant explaining the circumstances.

“As soon as we can — it looks like the motorcade is coming through right about now, so we’ll be able to open it up for traffic. The first thing we’ll try to get through will be an ambulance, but I can’t guarantee there will be —,” the officer said before the video suddenly ended.

A picture snapped by Clifford showed medics attending to the woman as they waited for the road to reopen.

A few people in scrubs are there now. Still no baby. Still no #Obama. Cc: @WehoDaily

— Carrie Clifford (@CarrieClifford) July 23, 2014

Clifford told TheBlaze she left the scene before the incident was resolved. According to KNBC, at last check the baby had still not arrived, and KTLA-TV reported the same thing Thursday morning.
This post has been updated.

Follow Oliver Darcy (@oliverdarcy) on Twitter


23 Jul 16:57

Halbig & Obamacare: Applying Modern Standards and Ex-Post-Facto Knowledge to Historical Analysis

by admin

h/t Jts5665

One of the great dangers of historical analysis is applying our modern standards and ex post facto knowledge to analysis of historical decisions.  For example, I see modern students all the time assuming that the Protestant Reformation was about secularization, because that is how we think about religious reform and the tide of trends that were to follow a century or two later.  But tell John Calvin's Geneva it was about secularization and they would have looked at you like you were nuts (if they didn't burn you).  Similarly, we take our horror of nuclear arms, developed during the Cold War, and apply it to the WWII decision-makers who dropped the bomb on Hiroshima.  I don't think there is anything harder in historical analysis than shedding our knowledge and attitudes and putting ourselves in the relevant time.

Believe it or not, it does not take 300 or even 50 years for these problems to manifest themselves.  They can occur in just four.  Take the recent Halbig case, one of a series of split decisions on the PPACA and whether IRS rules to allow government subsidies of health care policies in Federal exchanges are consistent with that law.

The case, Halbig v. Burwell, involved the availability of subsidies on federally operated insurance marketplaces. The language of the Affordable Care Act plainly says that subsidies are only available on exchanges established by states. The plaintiff argued this meant that, well, subsidies could only be available on exchanges established by states. Since he lives in a state with a federally operated exchange, his exchange was illegally handing out subsidies.

The government argued that this was ridiculous; when you consider the law in its totality, it said, the federal government obviously never meant to exclude federally operated exchanges from the subsidy pool, because that would gut the whole law. The appeals court disagreed with the government, 2-1. Somewhere in the neighborhood of 5 million people may lose their subsidies as a result.

This result isn’t entirely shocking. As Jonathan Adler, one of the architects of the legal strategy behind Halbig, noted today on a conference call, the government was unable to come up with any contemporaneous congressional statements that supported its view of congressional intent, and the statutory language is pretty clear. Members of Congress have subsequently stated that this wasn’t their intent, but my understanding is that courts are specifically barred from considering post-facto statements about intent.

We look at what we know NOW, which is that Federal health care exchanges operate in 37 states, and that the Federal exchange serves more customers than all the other state exchanges combined.  So, with this knowledge, we declare that Congress could not possibly have meant to deny subsidies to more than half the system.

But this is an ex-post-facto, fallacious argument.  The key is "what did Congress expect in 2010 when the law was passed," and it was pretty clear that Congress expected all the states to form exchanges.  In fact, the provision of subsidies only in state exchanges was the carrot Congress built in to encourage states to form exchanges. (Since Congress could not actually mandate that states form exchanges, it has to use such financial carrots and sticks.  Congress does this all the time, all the way back to seat belt and 55MPH speed limit mandates that were forced on states at the threat of losing state highway funds.  The Medicaid program has worked this way with states for years -- and the Obamacare Medicaid changes follow exactly this template of the Feds asking states to do something and providing incentives for them to do so in the form of Federal subsidies.)  Don't think of the issue as "not providing subsidies in federal exchanges."  That is not how Congress would have stated it at the time.  Think of it as "subsidies are not provided if the state does not build an exchange."  This was not a bug, it was a feature.  Drafters intended this as an incentive for creating exchanges.  That they never imagined so many states would not create exchanges does not change this fact.

It was not really until 2012 that anyone even took seriously the idea that states might not set up exchanges.  Even as late as December 2012, the list was only 17 states, not 37.  And note from the linked article the dissenting states' logic -- they were refusing to form an exchange because it was thought that the Feds could not set one up in time.  Why?  Because the Congress and the Feds had not planned on the Federal exchanges serving very many people.  It had never been the expectation or intent.

If, in 2010, on the day after Obamacare had passed, one had run around and said "subsidies don't apply in states that do not form exchanges" the likely reaction would not have been "WHAT?!"  but "Duh."  No one at the time would have thought that would "gut the whole law."

Postscript:  By the way, note how dangerous both of the arguments being used by opponents of the Halbig decision are:

  1. The implementation of these IRS regulations is so extensive and so far along that it would be disruptive to strike them down.  This means the Administration is claiming the power to do anything it wants, so long as it moves faster than the courts can work and makes sure the program in question affects lots of people.
  2. The courts should give almost unlimited deference to Administration interpretations of law.  This means, in effect, that the Administration rather than the courts is the preferred and default interpreter of law.  Does this make a lick of sense?  Why have a judiciary at all?
23 Jul 03:50

Overstock CEO Shares the ‘Real Story of This Country’

by Erica Ritz

Overstock CEO Patrick Byrne shared what he believes is the succinct history of the United States during an interview with Glenn Beck that aired Tuesday.

“The real story of this country can be told very succinctly,” Byrne remarked. “For about 150 years, it worked, the constitutional principles worked. And then in the 1930s, as all this power got shifted to Washington, we all figured out that you can go and, instead of competing, you can go and lobby and get checks written on the account of other people.”

Overstock CEO Patrick Byrne speaks on the Glenn Beck Program July 17, 2014. The second half of the interview aired on July 22. (Photo: TheBlaze TV)

Byrne said the United States did that for “about 50 years” when, in the 1980s, “everybody had gotten organized into groups,” both to lobby for other people’s money and to prevent their own interests from being harmed.

“Then we all found one group of people that we can write checks on their account, and they can’t stop us,” Byrne remarked. “That’s the group of future human beings that can never organize to stop us.”

“For about 30 years, we’ve been writing checks on the bank account of the future,” Byrne concluded. “Whether you’re talking about the environment or Social Security or Medicare, we’ve just written checks on the bank account of the future. And now the future has shown up, and life sucks.”

You can watch the complete interview below, which also includes a discussion of America’s “monoculture”:

Complimentary Clip from TheBlaze TV



21 Jul 16:01

Remy: What are the Chances? (An IRS Love Song)

by Remy

"Remy: What are the Chances? (An IRS Love Song)" is the latest from Reason TV. Watch above or click the link below for full lyrics, links, downloadable versions, and more. 

View this article.

23 Jul 02:57

Why Philosophers Should Stay Out of Politics

by Bas van der Vossen

Two years ago I participated in an NEH summer seminar for political philosophers. This was during the campaign for the 2012 Presidential election. One evening over drinks, I asked the others (15 or so philosophers from around the country) whether they had ever contributed any money to a political campaign. It turned out that everyone at the table but me had contributed to the Obama campaign that year.

As anyone who has spent some time in academia knows, this is hardly atypical. Many academics (philosophers and non-philosophers) spend considerable amounts of time and money on political activism. They vote (duh), put signs in their yard, attend party rallies, and so on. Heck, at my school “community-engaged scholarship” is now among the conditions of tenure.

Around the same time, I was reading Daniel Kahneman’s book Thinking Fast and Slow and Jonathan Haidt’s The Righteous Mind. Both books discuss the ways in which partisanship can bias our thinking. And so I started worrying about this. Because, as anyone who has spent some time in academia also knows, academics (philosophers included) are hardly the most ideologically diverse group. The ideological spectrum ranges roughly from left to extreme left. For a field that is supposed to think openly, critically, and honestly about the nature and purpose of politics, this is not a healthy state of affairs. The risk of people confirming one another’s preconceptions, or worse, trying to one-up each other, is simply too great.

(By the way, it’s likely that the risk is at least somewhat of a reality. I know of many libertarians who think that the level of argument and rigor that reviewers demand of their arguments is not quite the same as what is demanded of arguments for egalitarian conclusions. That is anecdotal evidence. For other fields, there is more robust empirical evidence. Psychologists Yoel Inbar and Joris Lammers have found that in their field ideological bias is very much a real thing.)

I mention this episode because it had a significant effect on how I think about the responsibilities of being a philosopher. I now think it is morally wrong for philosophers, and other academics who engage in politically relevant work, to be politically active (yes, you read that correctly).

The argument for this conclusion is, I think, startlingly simple. I develop it in detail in a now forthcoming paper In Defense of the Ivory Tower: Why Philosophers Should Stay out of Politics. Here is a quick summary of the argument:

  1. People who take up a certain role or profession thereby acquire a prima facie moral duty to make a reasonable effort to avoid those things that predictably make them worse at their tasks
  2. The task of political philosophers is to seek the truth about political issues
  3. Being politically active predictably makes us worse at seeking the truth about political issues
  4. Therefore, political philosophers have a prima facie moral duty to avoid being politically active

I have given this paper at a number of universities, and I have found that a lot of people are very resistant to the conclusion (to say the least). But each of the argument’s premises is true, I think, and so the conclusion must be true as well.

Lots of people resist premise (3). But that is really not up for debate. It is an empirical question whether political activism harms our ability to seek the truth about politics. And the empirical evidence is just overwhelming: it does. (You can find a bunch of cites in the paper, in addition to Haidt and Kahneman.)

Over at The Philosophers’ Cocoon, Marcus Arvan offers a different objection. He says he disagrees with premise (2), but his real objection is actually a bit different. Marcus suggests that there can be permissible trade-offs between activism and scholarship, such that a teensy little bit of activism is surely okay, even if it harms our scholarship. It is too simple, Marcus suggests, to say that we should forgo activism if it makes us worse at philosophy.

I don’t find this a powerful objection. Here is the reply I give in the paper, and it still seems plausible to me. The reason people want to be activist is that they want to make the world a better place. That’s cool – I want that too. But there are many, many ways to achieve this. And activism is but one of these. (It is also, I should add, a really inefficient way.) My point, then, is simple: if philosophers (and other academics) want to make the world a better place, they should do it in ways that do not make them bad at their jobs. That means they should do it without political activism.

So the argument stands, I think. But Marcus ends with a good question. What the hell am I doing on a blog with the word libertarians in its name? If political affiliations harm our ability to seek the truth, and seek the truth we must, then am I not being irresponsible as well? And he is right, there is a real risk in this. By self-labeling as a libertarian, I risk becoming biased in favor of certain arguments, premises, and conclusions, and against others. And that, to be sure, is something I want to avoid.

The honest answer is that I thought hard about it when I was asked to join the blog. (My wife asked the same question as Marcus did when I told her I was thinking of joining.) I decided that there was little additional risk to joining. For one, I have always seen myself as a reluctant libertarian. I grew up a Rawlsian and slowly moved away from those views toward more libertarian views. But I never became an “in the fold” kind of guy. So I apply the label only partially to myself. On the other hand, I am pretty deeply convinced of a number of things that will inevitably put me in a libertarian (or libertarian-like) camp. And this is something I know. So insofar as I do apply the label “libertarian” to myself, joining the blog didn’t add much to it.

Or so I told myself. But that is, of course, exactly the sort of thing that a biased person will tell himself. I am aware of that. What won the day, finally, was that the blog has no “party-line.” We have people here who defend basic income, parental licensing, Israel, Palestine, and lord knows what other view will come up next. We are a weird bunch. And I like the blog because of this. I think it helps show people just how diverse and intellectually rich the libertarian part of the conversation is (or can be). It helps me stay on my toes. And I wanted to contribute to that. So here I am.

Perhaps that was a mistake. I am open to persuasion. I made pretty radical changes to my life after becoming convinced of my thesis of non-activism. I no longer follow the political news, I have tried to distance myself from any sympathies I might have had for parties, movements or politicians (that one was easy), and so on. I highly recommend it. But maybe I didn’t go quite far enough. If someone can convince me, I’ll leave. Take your best shot.

22 Jul 21:41

D.C. Man Exonerated After Hair Analysis Review

Four months after a Washington, D.C. man was cleared by DNA testing when the hair analysis used to convict him was found to be wrong, his conviction was vacated Monday. Kevin Martin's exoneration comes nearly one year after the Innocence Project and the National Association of Criminal Defense Lawyers (NACDL) announced their partnership with the U.S. Federal Bureau of Investigation (FBI) and the U.S. Department of Justice to review microscopic hair analysis cases.
Martin was convicted of the 1982 rape and murder of Ursula Brown based largely on the claim that his hair was found at the scene of the crime. He spent more than 26 years behind bars before he was paroled in 2009 and settled in San Francisco.
The Washington Post reported that after DNA testing pointed to Martin's innocence earlier this year, U.S. Attorney Ronald C. Machen Jr. joined defense calls to overturn his conviction. Martin was joined in court by family when a Superior Court judge finally said the words he had been waiting to hear for nearly three decades.
"I am free at last. I am humbled. I never gave up," Martin said, hugging and high-fiving his attorneys. Martin's younger sister, his fiancee, his 6-year-old niece and other family members gathered around.
"I just want to live," said Martin, 50.
Brown's partially clothed body was discovered between a school yard and an apartment building in southwest D.C. She had been shot in the head, slashed and raped. Some of her belongings were found near the scene, as was a pair of sneakers that the prosecutor said belonged to the victim. Those sneakers became key to the case; prosecutors said that the FBI found one of Martin's pubic hairs on one of the shoes. Facing multiple life sentences if the case went to trial, Martin entered an Alford plea to manslaughter, acknowledging that the prosecution had sufficient evidence to convict him while not admitting guilt.
Martin first sought DNA testing in 2001 but was told the evidence from his case had been lost. More than a decade later, boxes from the investigation turned up at a new facility and although the hair was not located, other genetic evidence was recovered for testing. According to prosecutors, the DNA matched William D. Davidson, who is serving a sentence of 65 years to life for multiple offenses including being the lookout during Brown's attack.
Martin's is the fifth case since 2009 in which FBI hair analysis has been found to be wrong. Donald Gates, Kirk Odom, Santae Tribble and Cleveland Wright were also wrongly convicted based on false FBI hair analysis.
The Mid-Atlantic Innocence Project assisted in Martin's case.
Read the full article.

22 Jul 22:00

The War on Work

by Robert

21 Jul 20:34

Fingerprinting Computers By Making Them Draw Images

by schneier

Here's a new way to identify individual computers over the Internet. The page instructs the browser to draw an image. Because each computer draws the image slightly differently, this can be used to uniquely identify each computer. This is a big deal, because there's no way to block this right now.
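
The mechanics are simple to illustrate outside the browser. In the real technique, JavaScript draws text to a hidden canvas, reads the pixels back, and hashes them; tiny differences in fonts, anti-aliasing, and GPU rendering make the hash machine-specific. The Python sketch below uses hypothetical pixel buffers (not real canvas output) to show why a single sub-pixel rendering difference yields a completely different identifier.

```python
import hashlib

def fingerprint(pixels: bytes) -> str:
    """Hash a rendered pixel buffer; any rendering difference changes the digest."""
    return hashlib.sha256(pixels).hexdigest()

# Two hypothetical machines render the "same" image; one differs by a single
# unit of anti-aliasing in one channel, and the fingerprints diverge completely.
machine_a = bytes([120, 64, 200, 255] * 16)      # toy 4x4 RGBA buffer
machine_b = bytearray(machine_a)
machine_b[2] += 1                                # one sub-pixel off

print(fingerprint(machine_a) == fingerprint(bytes(machine_b)))  # False
```

Because the identifier is derived from how the machine renders rather than from stored state, clearing cookies does nothing; only preventing the pixel readback (as NoScript-style blocking does) defeats it.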

Article. Hacker News thread.

EDITED TO ADD (7/22): This technique was first described in 2012. And it seems that NoScript blocks this. Privacy Badger probably blocks it, too.

EDITED TO ADD (7/23): EFF has a good post on who is using this tracking system -- the White House is -- and how to defend against it.

And a good story on BoingBoing.

22 Jul 03:56

Scientists Recycle The Same Climate Crap, Generation After Generation

by stevengoddard
13 Feb 1941 – Impending Climatic Change. These people have no idea what they are talking about. They just want funding and attention.
22 Jul 14:01

"When the President does it, that means that it is not illegal"

by Kate

They're just mocking you now;

IRS Deputy Associate Chief Counsel Thomas Kane said in transcribed congressional testimony that more IRS officials experienced computer crashes, bringing the total number of crash victims to "less than 20," and also said that the agency does not know if the lost emails are still backed up somewhere.

The new round of computer crash victims includes David Fish, who routinely corresponded with Lois Lerner, as well as Lerner subordinate Andy Megosh, Lerner's technical adviser Justin Lowe, and Cincinnati-based agent Kimberly Kitchens.

22 Jul 07:59

James Madison: Veto Message on the Internal Improvements Bill

by Tenth Amendment

March 3, 1817: As his last official act as President, Madison vetoes a bill that would provide federal funding for building roads and canals throughout the United States. The President finds no expressed congressional power to fund roads and canals in the Constitution, and he believes that the federal government should not encroach upon matters delegated to state governments.

To the House of Representatives of the United States:

Having considered the bill this day presented to me entitled “An act to set apart and pledge certain funds for internal improvements,” and which sets apart and pledges funds “for constructing roads and canals, and improving the navigation of water courses, in order to facilitate, promote, and give security to internal commerce among the several States, and to render more easy and less expensive the means and provisions for the common defense,” I am constrained by the insuperable difficulty I feel in reconciling the bill with the Constitution of the United States to return it with that objection to the House of Representatives, in which it originated.

The legislative powers vested in Congress are specified and enumerated in the eighth section of the first article of the Constitution, and it does not appear that the power proposed to be exercised by the bill is among the enumerated powers, or that it falls by any just interpretation within the power to make laws necessary and proper for carrying into execution those or other powers vested by the Constitution in the Government of the United States.

“The power to regulate commerce among the several States” can not include a power to construct roads and canals, and to improve the navigation of water courses in order to facilitate, promote, and secure such a commerce without a latitude of construction departing from the ordinary import of the terms strengthened by the known inconveniences which doubtless led to the grant of this remedial power to Congress.

To refer the power in question to the clause “to provide for the common defense and general welfare” would be contrary to the established and consistent rules of interpretation, as rendering the special and careful enumeration of powers which follow the clause nugatory and improper. Such a view of the Constitution would have the effect of giving to Congress a general power of legislation instead of the defined and limited one hitherto understood to belong to them, the terms “common defense and general welfare” embracing every object and act within the purview of a legislative trust. It would have the effect of subjecting both the Constitution and laws of the several States in all cases not specifically exempted to be superseded by laws of Congress, it being expressly declared “that the Constitution of the United States and laws made in pursuance thereof shall be the supreme law of the land, and the judges of every State shall be bound thereby, anything in the constitution or laws of any State to the contrary notwithstanding.” Such a view of the Constitution, finally, would have the effect of excluding the judicial authority of the United States from its participation in guarding the boundary between the legislative powers of the General and the State Governments, inasmuch as questions relating to the general welfare, being questions of policy and expediency, are unsusceptible of judicial cognizance and decision.

A restriction of the power “to provide for the common defense and general welfare” to cases which are to be provided for by the expenditure of money would still leave within the legislative power of Congress all the great and most important measures of Government, money being the ordinary and necessary means of carrying them into execution.

If a general power to construct roads and canals, and to improve the navigation of water courses, with the train of powers incident thereto, be not possessed by Congress, the assent of the States in the mode provided in the bill can not confer the power. The only cases in which the consent and cession of particular States can extend the power of Congress are those specified and provided for in the Constitution.

I am not unaware of the great importance of roads and canals and the improved navigation of water courses, and that a power in the National Legislature to provide for them might be exercised with signal advantage to the general prosperity. But seeing that such a power is not expressly given by the Constitution, and believing that it can not be deduced from any part of it without an inadmissible latitude of construction and a reliance on insufficient precedents; believing also that the permanent success of the Constitution depends on a definite partition of powers between the General and the State Governments, and that no adequate landmarks would be left by the constructive extension of the powers of Congress as proposed in the bill, I have no option but to withhold my signature from it, and to cherishing the hope that its beneficial objects may be attained by a resort for the necessary powers to the same wisdom and virtue in the nation which established the Constitution in its actual form and providently marked out in the instrument itself a safe and practicable mode of improving it as experience might suggest.


21 Jul 21:00

More Evidence Uber Keeps People From Drunk Driving

by Paul Best

Ever since innovative ride-sharing services like Uber and Lyft started gaining popularity, people have made the intuitive assertion that these services could cut down on drinking and driving. People will choose an affordable, safe alternative to drunk driving if that alternative is readily available.

Just a few weeks ago, Pittsburgh resident Nate Good published a quick study that offered the first hard evidence that DUI rates may be decreasing in cities where Uber is popular. An analysis of Philadelphia's data showed an 11.1 percent decrease in the rate of DUIs since ridesharing services were made available, and an even more astonishing 18.5 percent decrease for people under 30. 

As everyone knows, however, correlation does not equal causation. Good's quick number-crunching was too simplistic to draw any overarching conclusions, but it did open the door for future studies. A recent, deeper analysis from Uber makes the case even stronger that ridesharing services may be responsible for a decline in DUIs.
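Good's number-crunching was essentially a before-and-after rate comparison. A minimal sketch of that calculation, using made-up figures rather than the actual Philadelphia DUI counts:

```python
def pct_change(before_rate, after_rate):
    """Percent change from a before-period rate to an after-period rate.

    A negative result is a decline (e.g. fewer DUIs per month).
    """
    return (after_rate - before_rate) / before_rate * 100.0


# Hypothetical illustrative numbers, not Good's actual data:
# average DUI arrests per month before and after ridesharing launched.
before = 450.0
after = 400.0
change = pct_change(before, after)  # negative means a decline
```

A comparison this simple is exactly why, as noted below, it can show correlation but not causation: nothing in it controls for other factors that changed over the same period.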

The first thing Uber did was use its own data to see if people disproportionately called for Uber cars from bars in comparison to other venues. And indeed:

Requests for rides come from Uber users at bars at a much higher rate than you might expect based on the number of bars there are in the city. The fraction of requests from users at bars are between three and five times greater than the total share of bars.

Next, they used government data to find out when deaths from DUIs are most likely to occur. Fatalities due to drunk driving start to peak at midnight, are the highest from 12:00-3:00 AM, and happen much more often on the weekends. Uber then gathered their own internal data and found that Uber transactions spiked at the times when people are most likely to drink and drive (as depicted in the chart above).
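The time-of-day comparison described above amounts to bucketing ride requests by hour and looking for peaks in the midnight-to-3:00 AM window. A sketch of that binning, with hypothetical timestamps standing in for Uber's internal data:

```python
from collections import Counter
from datetime import datetime


def requests_by_hour(timestamps):
    """Bucket ride-request timestamps by hour of day (0-23)."""
    return Counter(ts.hour for ts in timestamps)


# Hypothetical request times, not Uber's internal data.
rides = [
    datetime(2014, 7, 19, 0, 15),   # Saturday, 12:15 AM
    datetime(2014, 7, 19, 1, 40),
    datetime(2014, 7, 19, 2, 5),
    datetime(2014, 7, 19, 18, 30),  # Saturday evening
]
by_hour = requests_by_hour(rides)
```

Overlaying a histogram like this against the government's hourly DUI-fatality data is what lets the two curves be compared directly.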

There remains plenty of room for more studies on how Uber is affecting transportation trends. But early evidence for a positive impact—an impact that goes far beyond mere consumer convenience—is already compelling.

20 Jul 15:56

Quick Summary Of NCDC Data Tampering Forensics

by stevengoddard
It may not be obvious to everyone yet, but this morning’s TOBS discovery is huge. I need to run now, but here is a quick summary of things I can prove so far about the US temperature record. Until 1999 … Continue reading →
18 Jul 15:20

Chicago Officials Pretend to Be Puzzled at Traffic Cameras Sending Out Undeserved Tickets

by Scott Shackford

h/t Jts5665

Chicago Tribune reporters David Kidwell and Alex Richards have put together a massive investigation documenting huge problems causing the city’s red light cameras to send out thousands of tickets to innocent drivers. Today they report that after a bunch of cameras stopped giving out any tickets for a couple of days (suggesting possible downtime and perhaps some sort of fiddling), they suddenly went berserk, giving out dozens of tickets a day:

Cameras that for years generated just a few tickets daily suddenly caught dozens of drivers a day. One camera near the United Center rocketed from generating one ticket per day to 56 per day for a two-week period last summer before mysteriously dropping back to normal.

Tickets for so-called rolling right turns on red shot up during some of the most dramatic spikes, suggesting an unannounced change in enforcement. One North Side camera generated only a dozen tickets for rolling rights out of 100 total tickets in the entire second half of 2011. Then, over a 12-day spike, it spewed 563 tickets — 560 of them for rolling rights.

Many of the spikes were marked by periods immediately before or after when no tickets were issued — downtimes suggesting human intervention that should have been documented. City officials said they cannot explain the absence of such records.
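Jumps of the kind described in the excerpt above — one ticket a day to 56 a day — are trivial to flag programmatically, which underscores how odd it is that nobody noticed. A crude screen, assuming only a list of daily ticket counts (the numbers below are illustrative, not the Tribune's data):

```python
from statistics import median


def flag_spikes(daily_counts, factor=5, min_tickets=10):
    """Return indices of days whose ticket count jumps far above the
    median for the camera -- a crude screen for behavior changes."""
    base = median(daily_counts)
    return [i for i, c in enumerate(daily_counts)
            if c >= min_tickets and c > factor * max(base, 1)]


# Hypothetical camera: ~1 ticket/day, then a burst near 56/day.
counts = [1, 2, 1, 0, 1, 56, 54, 57, 1, 2]
spike_days = flag_spikes(counts)  # flags the three burst days
```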

City officials seem to be unable to explain anything at all, even as traffic courts buck typical behavior and have reversed nearly half the tickets appealed from one such spike. Transportation officials claim they didn’t even know it was happening until the reporters told them.

Oh, also of note: The company (which is supposed to inform the city of any such spikes) and the city’s program are under federal investigation for corruption. The company, Redflex Traffic Systems Inc., is accused of bribing a city official to the tune of millions in order to land the contract. The Chicago Tribune reported last year how the controversy caused Mayor Rahm Emanuel to disqualify Redflex from a new contract putting up speed cameras near schools and parks to increase revenue safety.

The Tribune notes that these traffic cameras have generated nearly $500 million in revenue since the program began in 2003, yet everybody in the lengthy story seems to dance around the idea that the city or Redflex could have any sort of incentive to make alterations to cause the system to suddenly start spitting out tickets. Chicago’s CBS affiliate noted last fall that the city’s budget for 2014 relied on revenue from its red light cameras (and the highest cigarette taxes in the nation) in order to balance.

(Hat tip to John Tillman of the Illinois Policy Institute)

19 Jul 09:25

The World Is Being Run By Crazy People

by Kate

Actually, that's not quite accurate.

I think the world is being run by aliens. Because only someone who has never been a child could think like this.


18 Jul 19:08

Intellectual and Practical Foolishness: The Precautionary Principle

by Roy W. Spencer, Ph. D.

Famous supporter of the Precautionary Principle.

The Precautionary Principle (PP) underlies a wide variety of policy efforts around the world today, including energy policy and the debate over the continued use of fossil fuels and the risk they pose regarding climate change. In the European Union, it is even required to be followed in some matters of statutory law.

According to Wikipedia, the Precautionary Principle states:

“if an action or policy has a suspected risk of causing harm to the public or to the environment, in the absence of scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those taking an action.”

Now, the foolishness of the PP is that it addresses the potential risks of a particular action without addressing the benefits.

This is just plain silliness, and a prescription for human suffering and death. Every modern advance, invention, or convenience you can think of has risks, and those risks must be weighed against their benefits.

There is no such thing as a no-risk human activity.

People even die from choking on food. Maybe we should outlaw food.

In the early years of the environmental movement, bad science combined with PP idealism led to restrictions on the use of DDT to control mosquitoes, which then led to at least tens of millions of needless deaths.

On the global warming front, the PP came up (at least implicitly) in the recent New York Times article profiling John Christy. While that article at least allowed Dr. Christy to state his position on climate change (kudos to NYT for that), Kerry Emanuel in that article likened the risk of not addressing climate change to telling a young girl to run across a busy street to catch her bus. The result could be deadly.

Now, I should be clear that John is OK with that article…it turned out better than he expected it would (we have a long history of being burned by the mainstream media).

But I’m not going to let misguided policy advice from scientists go unchallenged.

This Precautionary Principle idea that guides people like Kerry Emanuel is nonsense. (I like Kerry, BTW, and he is a top-notch atmospheric researcher). Would we abandon our most abundant and affordable energy sources, required for nearly everything we do, on the chance that there might be some non-zero negative consequence to adding 1 or 2 additional CO2 molecules to each 10,000 molecules of air?

And what about the benefits of more atmospheric CO2? Fewer temperature-related deaths, global greening, increased crop productivity, etc.? We should not accept the premise that more CO2 in the atmosphere is necessarily bad for life on Earth.

The people who advocate the PP are also the ones who have benefited from the advances modern science and engineering have provided us. And all of those advances carry risks.

I suspect the 1+ billion people still without electricity, or who are still using wood and dung for heating and cooking, would see things differently, too.

I realize I have mentioned the PP before, but the NYT article got me thinking about just how pervasive (and non-critical) a mindset this is becoming.

People like me are often asked the question, “But what if you are wrong?” regarding our skepticism that human-caused climate change is going to be something that requires a policy response.

Well, what if they are wrong? I have often said that human caused climate change presents theoretical risks, whereas restricting access to abundant and affordable energy causes real poverty and real deaths.

If they really want to follow the Precautionary Principle, then they should follow their own prescription, which I am rephrasing from the Wikipedia definition of the PP:

If reducing fossil fuel use has a suspected risk of causing harm to the public, in the absence of economic consensus that the reduction is not harmful, the burden of proof that it is not harmful falls on those advocating such a reduction.

18 Jul 01:57

Another Nail In The TOBS Coffin

by stevengoddard
Someone was suggesting earlier that the graph below, showing NCDC TOBS adjustments, might just be a coincidence. So I did a correlation between the two data sets, and found it is excellent. The important thing to remember is these two … Continue reading →
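The correlation the teaser above refers to is presumably a standard Pearson coefficient between two annual series. A minimal sketch, using hypothetical series rather than the actual NCDC data:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical illustrative series, not the NCDC adjustment data.
series_a = [0.10, 0.20, 0.35, 0.50, 0.80]
series_b = [0.12, 0.22, 0.33, 0.52, 0.79]
r = pearson_r(series_a, series_b)  # close to 1.0 for these inputs
```

A coefficient near 1.0 is what "excellent" correlation means here — though, as with any correlation, it says nothing by itself about which series drives the other.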
17 Jul 16:40

I Saw a Man Get Arrested For a Sex Crime Because He Made a Scheduling Error

by Lenore Skenazy

When I agreed to keynote the Reform Sex Offender Laws conference this week in Dallas, Texas, I didn't expect it to hit quite so close to home.

But before I arrived, I got a phone call from a soft-spoken, super-articulate young man, Joshua Gravens, who is a Soros Justice Scholar based in Dallas. His specialty is the injustice of the sex offender registry and the fact that it isn't making kids any safer (see this study and this article). He was also on the public sex offender list until recently and still has restrictions on his movement.* He invited me to come with him to the police department to give notice he had moved. Who could resist?

Josh became a sex offender at age 12. That's when he touched his sister's vagina, twice. His sister told their mom, Josh said it was true (he was too embarrassed at the time to mention that he himself had been raped as a young boy by three local high school kids), and their mom called a counseling service for advice. The counsellor said Josh's mother was required to report his crime to the authorities and the next day, he was arrested.

He spent the next four years in juvenile prison: the Texas Youth Commission, as it is officially called.

The charge was “aggravated sexual assault,” because any sex offense against a person under age 14 is automatically “aggravated.” He got out at age 16 and was put on the sex offender registry, which, in Dallas, requires him to report in person to the authorities once a year, as well as anytime anything in his life changes.

Today he is 27, married with children, and smiley. We met up, had a jolly breakfast (except for the fact he said he felt too pudgy to start a speaking tour), and then we went off to the registry, because his family had just moved to a new house and he had to let the state know no more than seven days after the move.

Just as the detective in the nondescript office finished typing this information into the system and Josh and I were about to go to lunch, a man with a beard and a badge strode up and said, “Joshua Gravens?”


“You are under arrest for not alerting the authorities to your new address.” He whipped out handcuffs. “Put your hands behind your back.”


As the man tightened the cuffs, Josh calmly explained he was registering his new address that very minute.

“The law says you have to register the fact you are going to move seven days before the move, too.”

“I think you're mistaken,” said Josh, as pleasantly as if discussing the weather.

“I was told to arrest you,” was the reply, and that was that. Josh handed me his car keys and followed the man out to his van along with a handcuffed woman who was crying. She was going to jail for having listed her address as a hotel when she actually lives in her car in front of the hotel.

(This statute suggests that the officer was correct: registrants must report their intention to change addresses seven days before actually moving.)

After trying to reach Josh's contacts, I hurried over to the sex offender conference to ask: What would happen to Josh now?

“I might be mistaken,” said Jon Cordeiro, a sex offender registrant and director of a Fort Worth re-entry program for offenders, “but technically he has broken the law and failure to comply with the registry laws is considered a new sexual offense.”

A sexual offense?

Yes. Any registering snafu is considered a sex crime, and depending on the judge, it can be punished as harshly as the original offense. In other words: Josh, at 27, will be treated as if he just touched an 8-year-old's vagina again.

“Typically, there's a mandatory minimum of two to five years,” said Cordeiro.

“In Arkansas, he'd be looking at six,” said another attendee.

Now, maybe Josh will get a great lawyer. Maybe he'll get a lenient judge, or compassionate prosecutor. Or maybe he'll spend half the next decade in prison, charged as a sexual predator for showing up 13 days late with his moving plans.

It's time to reform sex offender laws.

*The original version of this story referred to Gravens as a registered sex offender. He was actually removed from the public list in 2012 after his case drew media attention. But he remains on the list kept by police and his movement remains restricted due in part to two previous failures to register which still appear as felonies on his record, both of which he blames on being incorrectly informed about reporting requirements. 

16 Jul 13:39

On the Sex Offender List at 20 for Sex with an 18 Year Old…Or So He Thought

by lskenazy

The Sex Offender List was created and then made public with the goal of keeping kids safe. Not only has it had no effect on child safety whatsoever (see this study, and this article), it has bloated to include all sorts of “offenders” who do not and never did pose a threat to children. Here’s a letter I just got from the wife of one registrant:

My name is Carrie.  My husband is on the registry because at age 20 he hooked up with a co-worker that he believed to be 18. After he broke it off with her, her sister called and told him to marry the girl or go to jail. Turns out she was really 15 and an illegal immigrant. They wanted her to be able to gain citizenship by marriage and tried blackmailing in order to do it.

He did not marry her and ended up getting 5 years deferred probation. We now have three children together. I really don’t know how this is going to affect them, because at this point he has to register for life.

I asked Carrie if I could use her name and story as one illustration of how flawed the sex offender registry is.

I would love for you to use our story. The first time it really affected us I was 9 months pregnant with our first child. I found out the house that we put money on we were not allowed to live in two weeks before closing because of Sex Offender residency restrictions. My husband was done with his probation so I’d had no idea there were restrictions on him. Then I found out most cities have residency restrictions in Texas. It was nearly impossible for us to find a place to live. We did find a house and were able to move in six days before I had our daughter. There are other issues that affect us, such as employment for him. He is not currently working.

Do you feel any safer with Carrie’s husband on the registry? Or do you, like me, have the scary feeling that someone you know and love could end up on the sex offender registry for a consensual relationship that he had no idea was a crime? By the way, in another email, Carrie mentioned that her husband has now become a helicopter parent. “He had to attend a therapy group for 5 years and was constantly hearing about bad things happening to children. I am pretty protective too but my husband wants our kids with us all the time.  Everything they put him through really screwed him up.” 

The sex offender laws need reform. That’s why I am keynoting the RSOL (Reform Sex Offender Laws) Conference in Dallas on Thursday. – L 


16 Jul 18:40

07/16/14 PHD comic: 'Writing Progress'

Piled Higher & Deeper by Jorge Cham
title: "Writing Progress" - originally published 7/16/2014


17 Jul 15:00

The Law of Unintended Carbon Tax Consequences

by Guest Blogger
Coal generator admits its profits will fall without a carbon tax Guest essay by Phillip Hutchings Within minutes of the Australian parliament voting to scrap our carbon tax today, one of our major coal-fired electricity generators issued a profit warning … Continue reading →
16 Jul 23:38

Why scientists should talk to philosophers

by curryja
by Judith Curry The divorce between philosophers and scientists is fairly recent.  Its time for a reconciliation. [P]hilosophers have not kept up with science and their art is dead. – Stephen Hawking Philosophy is a field that, unfortunately, reminds me … Continue reading →
16 Jul 17:09

Google Drops Its ‘Real Name’ Policy, but Has Plans to Stomp Out ‘Wretched Hive of Scum’ Internet Trolls

by Elizabeth Kreft

Calling its own policy “unclear” and “difficult,” Google opted Tuesday to kill off its restriction on pseudonyms for Google+ and YouTube accounts.


Google changed its 3-year-old policy that restricted Google+ users to real names. Now, current and new account holders can use pseudonyms to hide their identities. (AP)

“Over the years … we steadily opened up this policy … Today, we are taking the last step: there are no more restrictions on what name you can use,” Yonatan Zunger, chief architect at Google+, said in a blog post.

Privacy advocates had petitioned the company for years, insisting the restrictions on false names especially harmed “marginalized” society members, such as rape survivors or victims of stalking, who want to remain anonymous but still yearn to get a message out to hungry audiences who could share in their life lessons.

Google wasn’t the only social platform under scrutiny for its naming policy; privacy groups also reeled when former Facebook marketing lead — and the founder’s sister — Randi Zuckerberg said in 2011 that she would like to get rid of Internet anonymity altogether.

“I think anonymity on the Internet has to go away. People behave a lot better when they have their real names down,” Randi Zuckerberg said. “I think people hide behind anonymity and they feel like they can say whatever they want behind closed doors.”

The name change policy didn’t make all Google+ users happy; several commenters jumped on Zunger’s post to declare their fondness for accountability.

“I loved the real names enforcement … So sad to see it go, let’s hope it will not look like the Facebook mess,” Lauren Dinclaux said. And user Chris Chase didn’t hold back: “I’m not sure that I care for this change. Does this mean that YouTube comments will go back to being a steaming pile of monkey s**t? Anonymity has its place on the internet. I’m just not sure that G+ is that place.”

But Google seems confident the policy change won’t mean Internet trolls will roam free.

“Oh, don’t worry. One of the reasons this is safe to launch is that our troll-smashing department has gotten very good at their jobs,” Zunger said in response to the commenters. “I spent two years working closely with the YouTube team on comments, and I think we have a much better understanding of what turned them into the wretched hive of scum and villainy we all know.”

Zunger indicated the change will likely affect the way comments are rated and shown, which perhaps opens the company up to a freedom of speech and censorship discussion.

“Our troll-smashing department has gotten very good at their jobs…”

“It had to do with more subtle aspects of the interface there: things like ‘top comments’ rewarding people for getting the most interaction, rather than the most positive interaction,” Zunger said. “We’ve changed all of those broken behaviors that we could find and are definitely not changing those back.”

What do you think? Do you prefer to have the option to be anonymous while using online forums, or do you prefer that real names be used to enforce accountability?

(H/T: Techdirt)


16 Jul 01:02

It Isn’t About Science

by stevengoddard
When confronted with the major issues I have presented, the Berkeley Earth, NASA and NOAA people demonstrate that their only interest is producing graphs showing warming while maintaining some appearance of plausible deniability. Actual scientists would want to understand why their data … Continue reading →
15 Jul 16:22

Why does Outlook map Ctrl+F to forward instead of find?

by Thom Holwerda
It's a widespread convention that the Ctrl+F keyboard shortcut initiates a Find operation. Word does it, Excel does it, Wordpad does it, Notepad does it, Internet Explorer does it. But Outlook doesn't. Why doesn't Outlook get with the program? Before clicking the link to go to the full story, try to guess the answer. I'm pretty sure you're going to be wrong.