Shared posts

22 Feb 00:22

Theory Could SMASH Five Physics Mysteries

by Cathal O'Connell, Cosmos

It's five theories for the price of one. One of the most ambitious physics theories in recent times claims to have solved five of the biggest head-scratchers in particle physics, each of them likely worthy of a Nobel prize in its own right. Crazy as the idea sounds, the paper describing it has managed to get past peer review at Physical Review Letters, a prestigious journal that has catalogued many of the most groundbreaking moments in physics history, such as the discovery of gravitational waves last year.
21 Feb 23:43

Arctic Screaming To Scientists: “Shut Up”

by tonyheller

In 2008, Lewis Pugh tried to kayak to the North Pole – based on the predictions of government fake scientists at NSIDC in Boulder, CO.

BBC NEWS | UK | Swimmer aims to kayak to N Pole

North Pole could be ice free in 2008 | New Scientist

Mark Serreze said the Arctic was screaming.

Scientists: ‘Arctic Is Screaming,’ Global Warming May Have Passed Tipping Point | Fox News

Pugh only made it a few miles out of Svalbard, where ice extent was above normal that summer.

Arctic sea ice extent is the same as it was eleven years ago.  The Arctic is indeed screaming at climate scientists – to shut up.

Charctic Interactive Sea Ice Graph | Arctic Sea Ice News and Analysis

14 Feb 22:43

The New Maunder Minimum? Vegetable Shortages Strike London

by Eric Worrall
Guest essay by Eric Worrall

The Sun reports that in London, some supermarkets are rationing purchases of vegetables like lettuce, which is in short supply due to Southern European crop failures. SALAD SHORTAGE What is the 2017 vegetable shortage, which supermarkets are rationing broccoli and lettuce and what’s the cause of the crisis? Tesco and…
14 Feb 22:31

Why the Oroville Dam Won’t Fail

by Roy W. Spencer, Ph.D.

While it is said, “never say never”, after researching this issue I’m pretty convinced that it would be nearly impossible for the Oroville Dam to fail.

Even though it is an earthfill embankment dam, which can be destroyed if the dam is topped, the following Metabunk graphic demonstrates why the Oroville design is virtually foolproof:

The emergency spillway (which is now in use) drains excess water along its 1,700 ft length when the lake level exceeds 901 ft. At this writing the lake level is 902.5 ft, which is 1.5 ft. above the lip of the spillway.

The water level would have to rise another 18.5 feet (!) in order to reach the top of the dam itself, which would never happen because the emergency spillway flow (which occurs over a natural ridge made of bedrock) would handle the excess flow long before the lake level ever reached that point.
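
To put the margins in one place, here is a quick arithmetic check in Python using only the figures quoted above; the implied crest elevation is derived from those figures, not stated in the post.

```python
# Quick arithmetic check using only the figures quoted in this post (illustrative only).
spillway_lip_ft = 901.0      # lake level at which the emergency spillway begins to flow
lake_level_ft = 902.5        # lake level at the time of writing
rise_to_dam_crest_ft = 18.5  # additional rise needed to reach the top of the dam

depth_over_lip_ft = lake_level_ft - spillway_lip_ft
implied_crest_ft = lake_level_ft + rise_to_dam_crest_ft

print(f"Water over the emergency spillway lip: {depth_over_lip_ft:.1f} ft")  # 1.5 ft
print(f"Implied elevation of the dam crest: {implied_crest_ft:.1f} ft")      # 921.0 ft
```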

Now, is there any scenario in which this might happen? I’m not a hydrologist, so I can’t answer that. But if there were a sudden warm spell in the next few weeks with, say, 10-20 inches of rain over the watershed melting most of the mountain snowpack, adding tremendously to the inflow into the lake, I’m sure we would see a much greater flow over the emergency spillway. But I suspect it would never reach the top of the dam itself. Nevertheless, there would be a massive flooding event downstream in the Feather and Sacramento Rivers.

14 Feb 22:24

Patrick Henry on What Makes a Country “Great.”

by Michael Boldin

How do you make America “great” – or any country, for that matter?

In a June 5, 1788 speech during the Virginia Ratifying Convention, Patrick Henry put it this way:

That country is become a great, mighty, and splendid nation; not because their government is strong and energetic, but, sir, because liberty is its direct end and foundation.

Liberty.


14 Feb 22:10

Economies in reverse

by John H. Cochrane
Remlaps

h/t Whig Zhou

How can economies forget? How is it that once we have learned to do something better, that knowledge can be lost and economies move backward? How can productivity decline? Viewing productivity as knowledge, it would seem almost impossible for it to do so -- and real business cycle theory was often derided on that point. Yet middle-ages Europeans lost the recipe for concrete, and time after time we have seen economies get worse. How can our own productivity be growing so slowly overall when so much we see around us is progressing so fast?

Scott Alexander at Slate Star Codex has an intriguing blog post that illuminates these questions (HT marginal revolution). I'll offer my thoughts on the answers at the end.

Scott starts with education:

Inputs triple, output unchanged. Productivity dropped to a third of its previous level.


Scott offers remarkable economic clarity on the issue:
"Which would you prefer? Sending your child to a 2016 school? Or sending your child to a 1975 school, and getting a check for $5,000 every year?

I’m proposing that choice because as far as I can tell that is the stakes here. 2016 schools have whatever tiny test score advantage they have over 1975 schools, and cost $5000/year more, inflation adjusted. That $5000 comes out of the pocket of somebody – either taxpayers, or other people who could be helped by government programs.
...College is even worse. Inflation-adjusted cost of a university education was something like $2000/year in 1980. Now it’s closer to $20,000/year.... Do you think that modern colleges provide $18,000/year greater value than colleges did in your parents’ day? Would you rather graduate from a modern college, or graduate from a college more like the one your parents went to, plus get a check for $72,000?  (or, more realistically, have $72,000 less in student loans to pay off)"
Health care is similarly bloated, though a more complex case.
The cost of health care has about quintupled since 1970. ... The average 1960 worker spent ten days’ worth of their yearly paycheck on health insurance; the average modern worker spends sixty days’ worth of it, a sixth of their entire earnings. 
Unlike schooling, health care is unquestionably better now. Scott notices that lifespan doesn't go up as much as we might have hoped, and other countries get the same lifespan with much less cost. Tell that to someone with an advanced cancer, curable with modern drugs and not with 1970 drugs. Still, it's a good example to keep in mind, as it's pretty clear health care is delivering a technologically more advanced product with a huge decrease in organizational efficiency.

Infrastructure, today's cause célèbre, is more telling:
"The first New York City subway opened around 1900. ...That looks like it’s about the inflation-adjusted equivalent of $100 million/kilometer today... In contrast, Vox notes [JC: This is an excellent article worth a blog post on its own] that a new New York subway line being opened this year costs about $2.2 billion per kilometer, suggesting a cost increase of twenty times – although I’m very uncertain about this estimate.
...The same Vox article notes that Paris, Berlin, and Copenhagen subways cost about $250 million per kilometer, almost 90% less. Yet even those European subways are overpriced compared to Korea, where a kilometer of subway in Seoul costs $40 million/km (another Korean subway project cost $80 million/km). This is a difference of 50x between Seoul and New York for apparently comparable services. It suggests that the 1900s New York estimate above may have been roughly accurate if their efficiency was roughly in line with that of modern Europe and Korea."
I have seen similar numbers for high speed trains -- ours cost multiples of France's, let alone China's.

I find this one particularly telling, because we're building 19th century technology, with 21st century tools -- huge boring machines that dramatically cut costs. And other countries still know how to do it for costs orders of magnitude lower than ours.

Similarly, housing. Bottom line:
"Or, once again, just ask yourself: do you think most poor and middle class people would rather:

1. Rent a modern house/apartment

2. Rent the sort of house/apartment their parents had, for half the cost"
Housing is a little different, I think, because much of the cost rise is the value of land, so supply restrictions are clearly at work.

More useful anecdotes, on whether this is real or just a figment of statistics:
The last time I talked about this problem, someone mentioned they’re running a private school which does just as well as public schools but costs only $3000/student/year, a fourth of the usual rate. Marginal Revolution notes that India has a private health system that delivers the same quality of care as its public system for a quarter of the cost. Whenever the same drug is provided by the official US health system and some kind of grey market supplement sort of thing, the grey market supplement costs between a fifth and a tenth as much; for example, Google’s first hit for Deplin®, official prescription L-methylfolate, costs $175 for a month’s supply; unregulated L-methylfolate supplement delivers the same dose for about $30. And this isn’t even mentioning things like the $1 bag of saline that costs $700 at hospitals. 
Where is the money going? It's not, despite what you may think, going to higher salaries:


Scott has similar evidence for college professors, doctors, nurses and so on. What about the fancy salaries you hear about?
...colleges are doing everything they can to switch from tenured professors to adjuncts, who complain of being overworked and abused while making about the same amount as a Starbucks barista.
It's also not going to profits, or CEO salaries. Those have not risen by the orders of magnitude necessary to explain the cost disease.
This can’t be pure price-gouging, since corporate profits haven’t increased nearly enough to be where all the money is going.
My thoughts below.

Scott's elegant summary:
So, to summarize: in the past fifty years, education costs have doubled, college costs have dectupled, health insurance costs have dectupled, subway costs have at least dectupled, and housing costs have increased by about fifty percent. US health care costs about four times as much as equivalent health care in other First World countries; US subways cost about eight times as much as equivalent subways in other First World countries.

And this is especially strange because we expect that improving technology and globalization ought to cut costs. In 1983, the first mobile phone cost $4,000 – about $10,000 in today’s dollars. It was also a gigantic piece of crap. Today you can get a much better phone for $100. This is the right and proper way of the universe. It’s why we fund scientists, and pay businesspeople the big bucks.

But things like college and health care have still had their prices dectuple. Patients can now schedule their appointments online; doctors can send prescriptions through the fax, pharmacies can keep track of medication histories on centralized computer systems that interface with the cloud, nurses get automatic reminders when they’re giving two drugs with a potential interaction, insurance companies accept payment through credit cards – and all of this costs ten times as much as it did in the days of punch cards and secretaries who did calculations by hand.

It’s actually even worse than this, because we take so many opportunities to save money that were unavailable in past generations. Underpaid foreign nurses immigrate to America and work for a song. Doctors’ notes are sent to India overnight where they’re transcribed by sweatshop-style labor for pennies an hour. Medical equipment gets manufactured in goodness-only-knows which obscure Third World country. And it still costs ten times as much as when this was all made in the USA – and that back when minimum wages were proportionally higher than today.

And it’s actually even worse than this. A lot of these services have decreased in quality, presumably as an attempt to cut costs even further. Doctors used to make house calls; even when I was young in the ’80s my father would still go to the houses of difficult patients who were too sick to come to his office. This study notes that for women who give birth in the hospital, “the standard length of stay was 8 to 14 days in the 1950s but declined to less than 2 days in the mid-1990s”. The doctors I talk to say this isn’t because modern women are healthier, it’s because they kick them out as soon as it’s safe to free up beds for the next person. Historic records of hospital care generally describe leisurely convalescence periods and making sure somebody felt absolutely well before letting them go; this seems bizarre to anyone who has participated in a modern hospital, where the mantra is to kick people out as soon as they’re “stable” ie not in acute crisis.

If we had to provide the same quality of service as we did in 1960, and without the gains from modern technology and globalization, who even knows how many times more health care would cost? Fifty times more? A hundred times more?
And the same is true for colleges and houses and subways and so on.
Scott points out that many of our intractable political debates -- paying for college, health care, housing, and transportation -- are made intractable by this bloat:
 I don’t know why more people don’t just come out and say “LOOK, REALLY OUR MAIN PROBLEM IS THAT ALL THE MOST IMPORTANT THINGS COST TEN TIMES AS MUCH AS THEY USED TO FOR NO REASON, PLUS THEY SEEM TO BE GOING DOWN IN QUALITY, AND NOBODY KNOWS WHY, AND WE’RE MOSTLY JUST DESPERATELY FLAILING AROUND LOOKING FOR SOLUTIONS HERE.” State that clearly, and a lot of political debates take on a different light.
What's happening?

I think Scott's post is exceptionally good because it points out the enormous size of the problem. It's just not salient to point to productivity numbers that grow a few percentage points higher or lower. When you add it up over decades and see that, while some things have gotten ten times better, other things are ten times more expensive than they should be, it really strikes home.

Scott tries on a list of candidate explanations and doesn't really find any. He comes closest with regulation, but correctly points out that formal regulatory requirements, though getting a lot worse, don't add up to the huge size of this cost disease.

So, what is really happening? I think Scott nearly gets there. Things cost 10 times as much as they should: 10 times more than they used to, and 10 times more than in other countries. It's not going to wages. It's not going to profits. So where is it going?

The unavoidable answer: the number of people it takes to produce these goods is skyrocketing. Labor productivity -- quality-adjusted output per number of people involved in the entire process -- declined by a factor of 10 in these areas. It pretty much has to be that: if the money is not going to profits, nor to each employee, it must be going to the number of employees.
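
A back-of-the-envelope illustration of that accounting (my numbers, purely hypothetical, not Cochrane's or Scott's): if pay per worker and profit per unit of output are held flat, only the head count per unit of output can deliver a tenfold rise in unit cost.

```python
# Toy decomposition: unit cost = workers per unit * pay per worker + profit per unit.
# All numbers below are made up for illustration only.

def unit_cost(workers_per_unit, pay_per_worker, profit_per_unit):
    return workers_per_unit * pay_per_worker + profit_per_unit

then = unit_cost(workers_per_unit=1.0, pay_per_worker=50_000, profit_per_unit=5_000)
now = unit_cost(workers_per_unit=10.9, pay_per_worker=50_000, profit_per_unit=5_000)

print(f"then: ${then:,.0f} per unit; now: ${now:,.0f} per unit")
print(f"cost ratio: {now / then:.1f}x with flat pay and flat profit")
# With pay and profit unchanged, roughly an 11x head count is what it takes
# to produce a 10x unit cost.
```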

How can that happen? Our machines are better than ever, as Scott points out. Well, we (and especially we economists) pay too much attention to snazzy gadgets. Productivity depends on organizations, not just on gadgets. Southwest figured out how to turn an airplane around in 20 minutes, and it still takes United an hour.

Contrariwise, I think we know where the extra people are. The ratio of teachers to students hasn't gone down a lot -- but the ratio of administrators to students has shot up. Most large public school systems spend more than half their budget on administrators. Similarly, class sizes at most colleges and universities haven't changed that much -- but administrative staff have exploded. There are 2.5 people handling insurance claims for every doctor. Construction sites have always had a lot of people standing around for every one actually working the machine. But now for every person operating the machine there is an army of planners, regulators, lawyers, administrative staff, consultants and so on. (I welcome pointers to good graphs and numbers on this sort of thing.)

So, my bottom line: administrative bloat.

Well, how does bloat come about? Regulations and law are, as Scott mentions, part of the problem. These are all areas either run by the government or with large government involvement. But the real key is, I think, lack of competition. These are, above all, areas with not much competition. In turn, however, they are not by a long shot "natural monopolies" or failures of some free market. The main effect of our regulatory and legal system is not so much to directly raise costs as it is to lessen competition (that is often its purpose). The lack of competition leads to the cost disease.

Though textbooks teach that monopoly leads to profits, it doesn't. "The best of all monopoly profits is a quiet life," said Hicks. Everywhere we see businesses protected from competition, especially highly regulated businesses, we see the cost disease spreading. And it spreads largely by forcing companies to hire loads of useless people.

Yes, technical regress can happen. Productivity depends as much on the functioning of large organizations, and the overall legal and regulatory system in which they operate, as it does on gadgets. We can indeed "forget" how those work. We peer at the buildings, aqueducts, dams, roads, and bridges put up by our ancestors, whether Roman or American, and wonder just how they did it.



12 Feb 00:46

PLOS Computational Biology: Ten Simple Rules to Enable Multi-site Collaborations through Data Sharing

TL;DR Figure 1 from the article illustrates the 10 rules. You can scroll down for some explanatory text or, better, [click through](http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005278) to the primary article:

---

[![journal.pcbi.1005278.g001.png](https://s28.postimg.org/fxjg591m5/journal_pcbi_1005278_g001.png)](https://postimg.org/image/6pr7ojujt/)

*\[Image Source: [PLOS Computational Biology](http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005278), License: Creative Commons Attribution\]*

---

### About the Authors

When I'm reading an article, I like to know a little bit about the author(s), so I did some quick searches and found these web sites:

- [Mary Regina Boland](https://systemsbiology.columbia.edu/people/mary-boland) is a graduate student at Columbia University. She studies genetic interactions at the system level in patients who take multiple gene-targeting medications, and focuses on identifying interactions that are responsible for adverse drug incidents.
- [Konrad J. Karczewski](https://konradjkarczewski.com/) is a research fellow at Massachusetts General Hospital with a PhD from the Biomedical Informatics training program at Stanford University, and is the coauthor of [Exploring Personal Genomics](https://www.amazon.com/Exploring-Personal-Genomics-Joel-Dudley/dp/0199644497).
- [Nicholas P. Tatonetti](http://www.tatonetti.com/cv.html) is Ms. Boland's research advisor and Assistant Professor of Biomedical Informatics at Columbia University. His CV lists numerous publications and awards; he serves on the editorial board of the [Journal of Biomedical Informatics](https://www.journals.elsevier.com/journal-of-biomedical-informatics/) and is an Honorary Editorial Board Member of [Drug Safety](https://link.springer.com/journal/40264).

---

## Introduction

A couple of months ago, I published [Ten Rules For Development of Biological Databases](https://steemit.com/science/@remlaps/ten-rules-for-development-of-biological-databases), which summarized an article of the same title from the Open Access journal, PLOS *Computational Biology*. In this month's issue, an article that caught my interest was [Ten Simple Rules to Enable Multi-site Collaborations through Data Sharing](http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005278). The article did not disappoint, so I thought I'd publish a summary of this one, too.

In opening, the authors explain that although the importance of collaboration is widely understood, many barriers still exist. They point out that the barriers are especially pronounced for multi-site collaboration efforts, but they assert that successful multi-site collaboration leads to enhanced scientific productivity and fulfillment. Accordingly, the authors provide us with 10 simple rules to enhance data sharing among scientific collaborators at multiple sites. Design focus for these rules included: privacy, platforms, holistic perspective, researcher engagement, and incentives.

### Rule 1: Make Software Open-Source

Internal sharing of code and algorithms is sufficient while research is being conducted, but after research is complete it should be made public. This sharing of code and algorithms enables replication and verification activities, and it also enables other researchers to build upon and enhance the findings. To assist with this, the authors suggest [Ten Simple Rules for the Open Development of Scientific Software](http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002802). With [R](https://www.r-project.org/), they also suggest the use of [CRAN](https://cran.r-project.org/) or [bioconductor](https://www.bioconductor.org/).

### Rule 2: Provide Open-Source Data

The authors suggest open data sharing as the ideal, but recognize the reality that it may be unethical or illegal to share some data, so they propose two strategies for data sharing: open repositories or partial sharing.

#### Open Repositories

The authors state that source data should be made open whenever possible, and they suggest the use of the National Center for Biotechnology Information (NCBI) [Sequence Read Archive (SRA)](https://www.ncbi.nlm.nih.gov/sra) and [Gene Expression Omnibus (GEO)](https://www.ncbi.nlm.nih.gov/geo/) for that purpose. They also suggest that sharing intermediate data files can be helpful. After publication, in addition to SRA and GEO, the authors suggest that [ClinVar](https://www.ncbi.nlm.nih.gov/clinvar/) can also be used to deposit the data. The authors assert that openness benefits scientific collaborators by enabling ease of analysis.

**Aside:** It's noteworthy that these researchers don't even seem to have the block chain in their field of view. I have often thought that something like the steem block chain could provide an excellent home for scientific data, and it could incentivize data sharing.

#### Partial Sharing

The authors refer to this as "Middle-Ground Data Sharing." When privacy or legal constraints prevent open sharing of information, platforms exist which provide access to shared data only for authorized individuals, on a field-by-field level. Examples that the authors give are the Shared Health Research Information Network (SHRINE), which provides HIPAA-compliant access to shared data, and the Australian [BioGrid](https://www.biogrid.org.au/). Another "middle-ground" approach is to provide summary statistics.

### Rule 3: Use Multiple Platforms to Share Research Products

We have already gained some insight into this rule from the suggestions for sharing platforms in rules 1 and 2. The platforms that are used for open and partial data sharing are not the same, and those differ from the platforms that were suggested for sharing code and algorithms in [R](https://www.r-project.org/). In addition to the above, the authors now suggest code sharing on [Figshare](https://figshare.com/) and [Github](https://github.com/), a platform that will be familiar to many steemizens. In addition to sharing code and data, an integral part of collaboration is communication. For this purpose, suggestions include Google forums, wiki pages, and the [ExAC Browser](http://exac.broadinstitute.org/), which integrates data from 17 independent consortiums.

### Rule 4: Secure Necessary Permissions/Data Use Agreements A Priori

This ties back to rule 2, and it strikes me as common sense, but it does need to be said. Sometimes it is illegal or improper to share data. Sometimes, it is even illegal or improper to use it. If you're not allowed to use it, you shouldn't use it. In my college days, it was common for students with professional jobs to use data from their job sites in their school projects. In my more recent grad-school experiences, this is no longer considered to be acceptable behavior.

### Rule 5: Know the Privacy Rules for Your Data

Again, this ties back to rules 2 and 4. In rule 4, a researcher only uses data that is authorized for use. In rule 5, the researcher only discloses information that is authorized for disclosure. Researchers are expected to know and comply with disclosure requirements. It is the privacy requirements in rule 5 that will determine whether a researcher uses open access or partial data sharing with collaborators.

### Rule 6: Facilitate Reproducibility

To assist with this endeavor, the authors suggest [Ten Simple Rules for Reproducible Computational Research](http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003285), which notes that, "Replication is the cornerstone of a cumulative science." In particular, methodologies, results, and definitions should be standardized. Programs like the [eMERGE network](https://academic.oup.com/jamia/article-lookup/doi/10.1136/amiajnl-2012-000896) have been created to facilitate these efforts.

### Rule 7: Think Global

Open access can be hindered by a culture-centric presentation. When collaborating with international researchers, issues like language can have a profound impact on data sharing. Differences that arise when collaborating on an international scale include mechanical differences and conceptual differences. When international standards are available, they should be used.

### Rule 8: Publicize Your Work

In [Ten Rules For Development of Biological Databases](https://steemit.com/science/@remlaps/ten-rules-for-development-of-biological-databases), the rule was, "Tell the world." Although the rule is similar, these authors come at it from a different tack. They note that research can be published in journals which do not require novelty, such as *PLOS ONE*, *Scientific Reports*, and *Cell Reports*. Data can be published in *Scientific Data* and *Database*. Further, provided that all source data is open, journals such as [F1000](https://f1000research.com/) are explicitly designed to facilitate open science. Finally, researchers can use blogs to interact with the public at large. When blogging, however, it is important not to oversell the research.

### Rule 9: Stay Realistic, but Aim High

Be humble and comply with data usage requirements. Overstating conclusions or unauthorized release of data may lead to retractions, which may harm everyone in the collaborative cell. Within those constraints, however, shoot for the moon. Science advances by pushing boundaries, so don't be afraid to challenge the status quo.

### Rule 10: Be Engaged

This ties back to rule 8, and the authors are sounding a bit like Agile programmers here.

- Use social media, especially github and figshare
- Release early and release often
- Engage using non-traditional methods like t-shirts.

## Conclusion

As with [Ten Rules For Development of Biological Databases](https://steemit.com/science/@remlaps/ten-rules-for-development-of-biological-databases), this article contains insights that are valuable within its own field, but also insights that are valuable externally. For example, aside from some of the specific platform suggestions, almost all of the rules apply to scientific researchers outside of the computational biology specialty. Personally, I made a note for myself to check out the [ExAC Browser](http://exac.broadinstitute.org/).

---

@remlaps is an IT professional with three decades of professional experience in data communications and information systems. He holds a bachelor's degree in mathematics, a master's degree in computer science, and a master's degree in information systems and technology management. He has also been awarded 3 US patents.
11 Feb 20:17

New Uranium-Based Minerals Discovered in Utah

by David Grossman, Pop Mech
A Notre Dame graduate student recently found three new minerals while exploring old uranium mines in Utah. The three new minerals, leesite, leószilárdite, and redcanyonite, are all new compounds of uranium and other components, allowing researchers to study how different forms of uranium can propagate in the natural environment.
11 Feb 20:15

Trump May Suspend Obama-era Rules on Conflict Minerals

by Richard Morrison

The Dodd-Frank Act’s “conflict minerals” rules have backfired, causing malnutrition, misery, and violence in Congo, a large country in the heart of Africa. Their intended purpose was to rein in violent militias in the eastern part of the nation, but they actually helped make militias more violent. Destabilizing a war-torn country that is a source of key minerals is certainly not in America’s national security interest. So it is nice to hear talk that these rules may be suspended.

11 Feb 19:20

Explaining Trump

by David Friedman
Remlaps

h/t Whig Zhou

There are two possible approaches to explaining odd things Trump does. One is to assume that he is stupid, crazy, erratic. Early in the campaign that looked like a plausible explanation. After he twice won contests everyone expected him to lose, first the nomination and then the election, it was still possible–he could have been lucky–but less plausible.

The alternative is that he is crazy like a fox, doing odd things for reasonable, perhaps tactically correct, reasons. I am not sure that is what is going on but I think it quite likely and have been trying to make sense of his moves on that basis.

The most recent example, not done directly by Trump, was preventing Elizabeth Warren from speaking against Sessions on the grounds that one senator was not supposed to say hostile things about another. That got a lot of negative publicity and was widely viewed as a blunder.

There is another possibility. The incident raised Warren's visibility and status within the Democratic party. That will tend to pull the party left. Trump may well believe that pulling the Democrats left will make it harder for them to win future elections. He may well be right.

Apply the same approach to making sense of the apparently bungled executive order on immigration. Including green card holders in the initial order made no sense in terms of the stated objective and was a considerable, and highly public, nuisance for those affected. The result was a lot of hostile criticism, greatly increasing the amount of publicity the executive order got. 

Seen from the standpoint of Trump's base, he was doing something about immigration and terrorism, as he had promised and it must be a substantial something if his enemies in the media were so upset about it. As Jack Goldsmith, a former head of the Justice Department’s Office of Legal Counsel and a Harvard Law School professor, put it:
“The president would get a huge symbolic boost with his base while not violating the law and while changing nothing of substance. He would get maximum symbolic value while doing nothing. Trump’s a genius at this.”
11 Feb 18:18

The 'alpha dog' myth is leading countless owners to mistreat their dogs

by Jessica Orwig

Do you think of yourself as the alpha dog of the house? Do you think it's important to assert your dominance over your dog? Well, you're in for a surprise. 

The alpha dog is a myth — alpha dogs don't exist in the wild and they should never exist in your household, either. Dog expert Alexandra Horowitz tells us why. 

Learn more about how your dog thinks and perceives the world in Horowitz's latest book "Being a Dog: Following the Dog Into a World of Smell."


11 Feb 18:02

Minimum Wage and Discrimination

by Walter Williams

There is little question in most academic research that increases in the minimum wage lead to increases in unemployment. The debatable issue is the magnitude of the increase. An issue not often included in minimum wage debates is the substitution effects of minimum wage increases. The substitution effect might explain why Business for a Fair Minimum Wage, a national network of business owners and executives, argues for higher minimum wages. Let's look at substitution effects in general.

When the price of anything rises, people seek substitutes and measures to economize. When gasoline prices rise, people seek to economize on the usage of gas by buying smaller cars. If the price of sugar rises, people seek cheaper sugar substitutes. If prices of goods in one store rise, people search for other stores. This last example helps explain why some businessmen support higher minimum wages. If they could impose higher labor costs on their less efficient competition, it might help drive them out of business. That would enable firms that survive to charge higher prices and earn greater profits.

There's a more insidious substitution effect of higher minimum wages. You see it by putting yourself in the place of a businessman who has to pay at least the minimum wage to anyone he hires. Say that you are hiring typists. There are some who can type 40 words per minute and others, equal in every other respect, who can type 80 words per minute. Whom would you hire? I'm guessing you'd hire the more highly skilled. Thus, one effect of the minimum wage is discrimination against the employment of lower-skilled workers. In some places, the minimum wage is $15 an hour. But if a lower-skilled worker could offer to work for, say, $8 an hour, you might hire him. In addition to discrimination against lower-skilled workers, the minimum wage denies them the chance of sharpening their skills and ultimately earning higher wages. The most effective form of training for most of us is on-the-job training.

An even more insidious substitution effect of minimum wages can be seen from a few quotations. During South Africa's apartheid era, racist unions, which would never accept a black member, were the major supporters of minimum wages for blacks. In 1925, the South African Economic and Wage Commission said, "The method would be to fix a minimum rate for an occupation or craft so high that no Native would be likely to be employed." Gert Beetge, secretary of the racist Building Workers' Union, complained, "There is no job reservation left in the building industry, and in the circumstances, I support the rate for the job (minimum wage) as the second-best way of protecting our white artisans." "Equal pay for equal work" became the rallying slogan of the South African white labor movement. These laborers knew that if employers were forced to pay black workers the same wages as white workers, there'd be reduced incentive to hire blacks.

South Africans were not alone in their minimum wage conspiracy against blacks. After a bitter 1909 strike by the Brotherhood of Locomotive Firemen and Enginemen in the U.S., an arbitration board decreed that blacks and whites were to be paid equal wages. Union members expressed their delight, saying, "If this course of action is followed by the company and the incentive for employing the Negro thus removed, the strike will not have been in vain."

Our nation's first minimum wage law, the Davis-Bacon Act of 1931, had racist motivation. During its legislative debate, its congressional supporters made such statements as, "That contractor has cheap colored labor that he transports, and he puts them in cabins, and it is labor of that sort that is in competition with white labor throughout the country." During hearings, American Federation of Labor President William Green complained, "Colored labor is being sought to demoralize wage rates."

Today's stated intentions behind the support of minimum wages are nothing like yesteryear's. However, intentions are irrelevant. In the name of decency, we must examine the effects.

11 Feb 18:01

On school choice, wise words from the advocate you would never expect

by Elizabeth English
Remlaps

h/t Whig Zhou

Following last week’s celebration of Catholic Schools Week (January 26 through February 4, 2017), the debate over “school choice” really got heated when every Democratic senator (and two Republicans) voted against confirming school choice advocate Betsy DeVos for Secretary of Education in yesterday’s Senate floor vote (disclosure: DeVos formerly served as a trustee on the AEI board). Vice President Mike Pence broke the 50-50 tie, making DeVos the nation’s eleventh individual to hold the office.

During the heated battle over DeVos’ confirmation, Senator Elizabeth Warren of Massachusetts wrote a public letter to DeVos that was highly critical of her support for charter schools and voucher programs.

On January 9, 2017, Warren wrote:

For decades, you have been one of the nation’s strongest advocates for radically transforming the public education system through the use of taxpayer-funded vouchers that steer public dollars away from traditional public schools to private and religious schools…But the actual evidence on how private voucher programs affect educational outcomes is mixed at best, in many cases reveals these programs to be expensive and dangerous failures that cost taxpayers billions of dollars while destroying public education systems.

But Warren would be wise to re-read the words of a committed advocate for school reform and choice from back in 2003.


That thoughtful policy leader wrote:

An all-voucher or all-school choice system would be a shock to the educational system, but the shake out might be just what the system needs…But over time, the whole concept of “the Beverly Hills schools” or “Newton schools” would die out, replaced in the hierarchy by schools that offer a variety of programs that parents want for their children, regardless of the geographic boundaries. By selecting where to send their children (and where to spend their vouchers), parents would take control over schools’ tax dollars, making them the de facto owners of those schools.

No, those aren’t the words of economist Milton Friedman, who invented the concept of school vouchers. Nor are they the words of now-Education Secretary Betsy DeVos.

That was Elizabeth Warren in her book, “The Two-Income Trap”.

10 Feb 14:40

Climate scientists versus climate data

by curryja

by John Bates

A look behind the curtain at NOAA’s climate data center.

I read with great irony recently that scientists are “frantically copying U.S. Climate data, fearing it might vanish under Trump” (e.g., Washington Post 13 December 2016). As a climate scientist formerly responsible for NOAA’s climate archive, the most critical issue in archival of climate data is actually scientists who are unwilling to formally archive and document their data. I spent the last decade cajoling climate scientists to archive their data and fully document the datasets. I established a climate data records program that was awarded a U.S. Department of Commerce Gold Medal in 2014 for visionary work in the acquisition, production, and preservation of climate data records (CDRs), which accurately describe the Earth’s changing environment.

The most serious example of a climate scientist not archiving or documenting a critical climate dataset was the study of Tom Karl et al. 2015 (hereafter referred to as the Karl study or K15), purporting to show no ‘hiatus’ in global warming in the 2000s (Federal scientists say there never was any global warming “pause”). The study drew criticism from other climate scientists, who disagreed with K15’s conclusion about the ‘hiatus.’ (Making sense of the early-2000s warming slowdown). The paper also drew the attention of the Chairman of the House Science Committee, Representative Lamar Smith, who questioned the timing of the report, which was issued just prior to the Obama Administration’s Clean Power Plan submission to the Paris Climate Conference in 2015.

In the following sections, I provide the details of how Mr. Karl failed to disclose critical information to NOAA, Science Magazine, and Chairman Smith regarding the datasets used in K15. I have extensive documentation that provides independent verification of the story below. I also provide my suggestions for how we might keep such a flagrant manipulation of scientific integrity guidelines and scientific publication standards from happening in the future. Finally, I provide some links to examples of what well documented CDRs look like that readers might contrast and compare with what Mr. Karl has provided.

Background

In 2013, prior to the Karl study, the National Climatic Data Center [NCDC, now the NOAA National Centers for Environmental Information (NCEI)] had just adopted much improved processes for formal review of Climate Data Records, a process I formulated [link]. The land temperature dataset used in the Karl study had never been processed through the station adjustment software before, which led me to believe something was amiss. When I pressed the co-authors, they said they had decided not to archive the dataset, but did not defend the decision. One of the co-authors said there were ‘some decisions [he was] not happy with’. The data used in the K15 paper were only made available through a web site, not in machine-readable form, and lacked proper versioning and any notice that they were research and not operational data. I was dumbstruck that Tom Karl, the NCEI Director in charge of NOAA’s climate data archive, would not follow the policy of his own Agency nor the guidelines in Science magazine for dataset archival and documentation.

I questioned another co-author about why they chose to use a 90% confidence threshold for evaluating the statistical significance of surface temperature trends, instead of the standard for significance of 95% — he also expressed reluctance and did not defend the decision. A NOAA NCEI supervisor remarked how it was eye-opening to watch Karl work the co-authors, mostly subtly but sometimes not, pushing choices to emphasize warming. Gradually, in the months after K15 came out, the evidence kept mounting that Tom Karl constantly had his ‘thumb on the scale’—in the documentation, scientific choices, and release of datasets—in an effort to discredit the notion of a global warming hiatus and rush to time the publication of the paper to influence national and international deliberations on climate policy.

Defining an Operational Climate Data Record

For nearly two decades, I’ve advocated that if climate datasets are to be used in important policy decisions, they must be fully documented, subject to software engineering management and improvement processes, and be discoverable and accessible to the public with rigorous information preservation standards. I was able to implement such policies, with the help of many colleagues, through the NOAA Climate Data Record policies (CDR) [link].

Once the CDR program was funded, beginning in 2007, I was able to put together a team and pursue my goals of operational processing of important climate data records emphasizing the processes required to transition research datasets into operations (known as R2O). Figure 1 summarizes the steps required to accomplish this transition in the key elements of software code, documentation, and data.

Figure 1. Research to operations transition process methodology from Bates et al. 2016.

Unfortunately, the NCDC/NCEI surface temperature processing group was split on whether to adopt this process, with scientist Dr. Thomas C. Peterson (a co-author on K15, now retired from NOAA) vigorously opposing it. Tom Karl never required the surface temperature group to use the rigor of the CDR methodology, although a document was prepared identifying what parts of the surface temperature processing had to be improved to qualify as an operational CDR.

Tom Karl liked the maturity matrix so much that he modified the matrix categories so that he could claim a number of NCEI products were “Examples of ‘Gold’ standard NCEI Products (Data Set Maturity Matrix Model Level 6).” See his NCEI overview presentation [ncei-overview-2015nov-2], which all NCEI employees were told to use, even though there had never been any maturity assessment of any of the products.

NCDC/NCEI surface temperature processing and archival

In the fall of 2012, the monthly temperature products issued by NCDC were incorrect for 3 months in a row [link]. As a result, the press releases and datasets had to be withdrawn and reissued. Dr. Mary Kicza, then the NESDIS Associate Administrator (the parent organization of NCDC/NCEI in NOAA), noted that these repeated errors reflected poorly on NOAA and required NCDC/NCEI to improve its software management processes so that such mistakes would be minimized in the future. Over the next several years, NCDC/NCEI had an incident report conducted to trace these errors and recommend corrective actions.

Following those and other recommendations, NCDC/NCEI began to implement new software management and process management procedures, adopting some of the elements of the CDR R2O process. In 2014 a NCDC/NCEI Science Council was formed to review new science activities and to review and approve new science products for operational release. A draft operational readiness review (ORR) was prepared and used for approval of all operational product releases, which was finalized and formally adopted in January 2015. Along with this process, a contractor who had worked at the CMMI Institute (CMMI, Capability Maturity Model Integration, is a software engineering process level improvement training and appraisal program) was hired to improve software processes, with a focus on improvement and code rejuvenation of the surface temperature processing code, in particular the GHCN-M dataset.

The first NCDC/NCEI surface temperature software to be put through this rejuvenation was the pairwise homogeneity adjustment portion of processing for the GHCN-Mv4 beta release of October 2015. The incident report had found that there were unidentified coding errors in the GHCN-M processing that caused unpredictable results and different results every time code was run.

The generic flow of data used in processing of the NCDC/NCEI global temperature product suite is shown schematically in Figure 2. There are three steps to the processing, and two of the three steps are done separately for the ocean versus land data. Step 1 is the compilation of observations either from ocean sources or land stations. Step 2 involves applying various adjustments to the data, including bias adjustments, and provides as output the adjusted and unadjusted data on a standard grid. Step 3 involves application of a spatial analysis technique (empirical orthogonal teleconnections, EOTs) to merge and smooth the ocean and land surface temperature fields and provide these merged fields as anomaly fields for ocean, land and global temperatures. This is the product used in K15. Rigorous ORR for each of these steps in the global temperature processing began at NCDC in early 2014.

Figure 2. Generic data flow for NCDC/NCEI surface temperature products.
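
To make the three-step structure concrete, here is a deliberately toy sketch in Python of the flow Figure 2 describes. Every function name and number below is hypothetical; it stands in for what are, in reality, large operational processing systems (GHCN-M, ERSST, and the merged analysis), and the simple mean in step 3 is a placeholder for the EOT spatial analysis.

```python
# Illustrative toy pipeline mirroring the three-step flow described above
# (compile -> adjust -> merge). Names and numbers are hypothetical, not NOAA code.

from statistics import mean

def compile_observations(raw_records):
    # Step 1: gather raw station or ship observations (here, plain numbers).
    return [r for r in raw_records if r is not None]

def adjust(observations, bias=0.0):
    # Step 2: apply a bias adjustment; keep both adjusted and unadjusted series.
    return {"unadjusted": observations,
            "adjusted": [obs - bias for obs in observations]}

def merge(land, ocean, baseline):
    # Step 3: merge land and ocean into a single anomaly series
    # (a real system uses spatial analysis such as EOTs, not a simple mean).
    combined = [mean(pair) for pair in zip(land, ocean)]
    return [value - baseline for value in combined]

land = adjust(compile_observations([14.1, 14.3, None, 14.6]), bias=0.1)
ocean = adjust(compile_observations([16.0, 16.2, 16.1]), bias=-0.05)
print(merge(land["adjusted"], ocean["adjusted"], baseline=15.0))
```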

In K15, the authors describe that the land surface air temperature dataset included the GHCN-M station data and also the new ISTI (Integrated Surface Temperature Initiative) data that was run through the then operational GHCN-M bias correction and gridding program (i.e., Step 2 of land air temperature processing in Figure 2). They further indicated that this processing and subsequent corrections were ‘essentially the same as those used in GHCN-Monthly version 3’. This may have been the case; however, doing so failed to follow the process that had been initiated to ensure the quality and integrity of datasets at NCDC/NCEI.

The GHCN-M V4 beta was put through an ORR in October 2015; the presentation made it clear that any GHCN-M version using the ISTI dataset should, and would, be called version 4. This is confirmed by parsing the file name actually used on the FTP site for the K15 dataset [link] (NOTE: placing a non-machine-readable copy of a dataset on an FTP site does not constitute archiving a dataset). One file is named ‘box.12.adj.4.a.1.20150119’, where ‘adj’ indicates adjusted (passed through step 2 of the land processing) and ‘4.a.1’ means version 4 alpha run 1; the entire name indicates GHCN-M version 4a run 1. That is, the folks who did the processing for K15 and saved the file actually used the correct naming and versioning, but K15 did not disclose this. Clearly labeling the dataset would have indicated this was a highly experimental early GHCN-M version 4 run rather than a routine, operational update. As such, according to NOAA scientific integrity guidelines, it would have required a disclaimer not to use the dataset for routine monitoring.
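
As an aside, the naming convention described above is simple enough to parse mechanically. The sketch below is only illustrative; the field layout it assumes is inferred from the description in this post, not from any official NOAA specification.

```python
# Illustrative parser for the file name discussed above ('box.12.adj.4.a.1.20150119').
# The field layout assumed here is inferred from the post's description, not an
# official NOAA specification.

def parse_k15_filename(name):
    parts = name.split(".")
    # e.g. ['box', '12', 'adj', '4', 'a', '1', '20150119']
    return {
        "product": parts[0],
        "box": parts[1],
        "adjusted": parts[2] == "adj",   # passed through step 2 of land processing
        "version": parts[3] + parts[4],  # '4a' -> GHCN-M version 4 alpha
        "run": parts[5],                 # run 1
        "date": parts[6],                # YYYYMMDD
    }

print(parse_k15_filename("box.12.adj.4.a.1.20150119"))
# The version/run fields mark this as an experimental v4 alpha run,
# not an operational release.
```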

In August 2014, in response to the continuing software problems with GHCNMv3.2.2 (version of August 2013), the NCDC Science Council was briefed about a proposal to subject the GHCNMv3 software, and particularly the pairwise homogeneity analysis portion, to a rigorous software rejuvenation effort to bring it up to CMMI level 2 standards and resolve the lingering software errors. All software has errors and it is not surprising there were some, but the magnitude of the problem was significant and a rigorous process of software improvement like the one proposed was needed. However, this effort was just beginning when the K15 paper was submitted, and so K15 must have used data with some experimental processing that combined aspects of V3 and V4 with known flaws. The GHCNMv3.X used in K15 did not go through any ORR process, and so what precisely was done is not documented. The ORR package for GHCNMv4 beta (in October 2015) uses the rejuvenated software and also includes two additional quality checks versus version 3.

Which version of the GHCN-M software K15 used is further confounded by the fact that GHCNMv3.3.0, the upgrade from version 3.2.2, only went through an ORR in April 2015 (i.e., after the K15 paper was submitted and revised). The GHCN-Mv3.3.0 ORR presentation demonstrated that the GHCN-M version changes between V3.2.2 and V3.3.0 had impacts on rankings of warmest years and trends. The data flow that was operational in June 2015 is shown in figure 3.

Figure 3. Data flow for surface temperature products described in the K15 Science paper. Green indicates operational datasets having passed ORR and archived at time of publication. Red indicates experimental datasets never subject to ORR and never archived.

It is clear that the actual nearly-operational release of GHCN-Mv4 beta is significantly different from the version GHCNM3.X used in K15. Since the version GHCNM3.X never went through any ORR, the resulting dataset was also never archived, and it is virtually impossible to replicate the result in K15.

At the time of the publication of the K15, the final step in processing the NOAAGlobalTempV4 had been approved through an ORR, but not in the K15 configuration. It is significant that the current operational version of NOAAGlobalTempV4 uses GHCN-M V3.3.0 and does not include the ISTI dataset used in the Science paper. The K15 global merged dataset is also not archived nor is it available in machine-readable form. This is why the two boxes in figure 3 are colored red.

The lack of archival of the GHCN-M V3.X and the global merged product is also in violation of Science policy on making data available [link]. This policy states: “Climate data. Data should be archived in the NOAA climate repository or other public databases”. Did Karl et al. disclose to Science Magazine that they would not be following the NOAA archive policy, would not archive the data, and would only provide access to a non-machine-readable version on an FTP server?

For ocean temperatures, the ERSST version 4 is used in the K15 paper and represents a major update from the previous version. The bias correction procedure was changed and this resulted in different SST anomalies and different trends during the last 15+ years relative to ERSST version 3. ERSSTV4 beta, a pre-operational release, was briefed to the NCDC Science Council and approved on 30 September 2014.

The ORR for ERSSTV4, the operational release, took place in the NCDC Science Council on 15 January 2015. The ORR focused on process; questions about some of the controversial scientific choices made in the production of that dataset will be discussed in a separate post. The review went well and there was only one point of discussion on process. One slide in the presentation indicated that operational release was to be delayed to coincide with the Karl et al. 2015 Science paper release. Several Science Council members objected to this, noting the K15 paper did not contain any further methodological information—all of that had already been published and thus there was no rationale to delay the dataset release. After discussion, the Science Council voted to approve the ERSSTv4 ORR and recommend immediate release.

The Science Council reported this recommendation to the NCDC Executive Council, the highest NCDC management board. In the NCDC Executive Council meeting, Tom Karl did not approve the release of ERSSTv4, noting that he wanted its release to coincide with the release of the next version of GHCNM (GHCNMv3.3.0) and NOAAGlobalTemp. Those products each went through an ORR at NCDC Science Council on 9 April 2015, and were used in operations in May. The ERSSTv4 dataset, however, was still not released. NCEI used these new analyses, including ERSSTv4, in its operational global analysis even though it was not being operationally archived. The operational version of ERSSTv4 was only released to the public following publication of the K15 paper. The withholding of the operational version of this important update came in the middle of a major ENSO event, thereby depriving the public of an important source of updated information, apparently for the sole purpose of Mr. Karl using the data in his paper before making the data available to the public.

So, in every aspect of the preparation and release of the datasets leading into K15, we find Tom Karl’s thumb on the scale pushing for, and often insisting on, decisions that maximize warming and minimize documentation. I finally decided to document what I had found using the climate data record maturity matrix approach. I did this and sent my concerns to the NCEI Science Council in early February 2016 and asked to be added to the agenda of an upcoming meeting. I was asked to turn my concerns into a more general presentation on requirements for publishing and archiving. Some on the Science Council, particularly the younger scientists, indicated they had not known of the Science requirement to archive data and were not aware of the open data movement. They promised to begin an archive request for the K15 datasets that were not archived; however I have not been able to confirm they have been archived. I later learned that the computer used to process the software had suffered a complete failure, leading to a tongue-in-cheek joke by some who had worked on it that the failure was deliberate to ensure the result could never be replicated.

Where do we go from here?

I have wrestled for a long time about what to do about this incident. I finally decided that there needs to be systemic change both in the operation of government data centers and in scientific publishing, and I have decided to become an advocate for such change. First, Congress should re-introduce and pass the OPEN Government Data Act. The Act states that federal datasets must be archived and made available in machine readable form, neither of which was done by K15. The Act was introduced in the last Congress and the Senate passed it unanimously in the lame duck session, but the House did not. This bodes well for re-introduction and passage in the new Congress.

However, the Act will be toothless without an enforcement mechanism. For that, there should be mandatory, independent certification of federal data centers. As I noted, the scientists working in the trenches would actually welcome this, as the problem has been one of upper management taking advantage of their position to thwart the existing executive orders and a lack of process adopted within Agencies at the upper levels. Only an independent, outside body can provide the needed oversight to ensure Agencies comply with the OPEN Government Data Act.

Similarly, scientific publishers have formed the Coalition on Publishing Data in the Earth and Space Sciences (COPDESS) with a signed statement of commitment to ensure open and documented datasets are part of the publication process. Unfortunately, they, too, lack any standard checklist that peer reviewers and editors can use to ensure the statement of commitment is actually enforced. In this case, and for assessing archives, I would advocate a metric such as the data maturity model that I and colleagues have developed. This model has now been adopted and adapted by several different groups, applied to hundreds of datasets across the geophysical sciences, and has been found useful for ensuring information preservation, discovery, and accessibility.

Finally, there needs to be a renewed effort by scientists and scientific societies to provide training and conduct more meetings on ethics. Ethics needs to be a regular topic at major scientific meetings, in graduate classrooms, and in continuing professional education. Respectful discussion of different points of view should be encouraged. Fortunately, there is initial progress to report here, as scientific societies are now coming to grips with the need for discussion of and guidelines for scientific ethics.

There is much to do in each of these areas. Although I have retired from the federal government, I have not retired from being a scientist. I now have the luxury of spending more time on these things that I am most passionate about. I also appreciate the opportunity to contribute to Climate Etc. and work with my colleague and friend Judy on these important issues.

Postlude

A couple of examples of how the public can find and use CDR operational products, and what is lacking in a non-operational and non-archived product

  1. NOAA CDR of total solar irradiance – this is the highest level of quality. Start at the web site – https://data.nodc.noaa.gov/cgi-bin/iso?id=gov.noaa.ncdc:C00828

Here you will see a fully documented CDR. At the top, we have the general description and how to cite the data. Then below, you have a set of tabs with extensive information. Click each tab to see how it’s done. Note, for example, that in ‘documentation’ you have choices to get the general documentation, processing documents including source code, the data flow diagram, and the algorithm theoretical basis document (ATBD), which includes all the info about how the product is generated, and then associated resources. This also includes a permanent digital object identifier (DOI) that points uniquely to this dataset.

  2. NOAA CDR of mean layer temperature – RSS – one generation behind in documentation but still quite good – https://www.ncdc.noaa.gov/cdr/fundamental/mean-layer-temperature-rss

Here on the left you will find the documents again that are required to pass the CDR operations and archival. Even though it’s a slight cut below TSI in example 1, a user has all they need to use and understand this.

  3. The Karl hiatus paper can be found on NCEI here – https://www.ncdc.noaa.gov/news/recent-global-surface-warming-hiatus

If you follow the quick link ‘Download the Data via FTP’ you go here – ftp://ftp.ncdc.noaa.gov/pub/data/scpub201506/

The contents of this FTP site were entered into the NCEI archive following my complaint to the NCEI Science Council. However, the artifacts for full archival of an operational CDR are not included, so this is not compliant with archival standards.
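For readers who want to check for themselves what is (and is not) in that directory, here is a minimal sketch using Python’s standard-library ftplib; it assumes the NOAA FTP server is still reachable and the path above is unchanged.

```python
# Minimal sketch: list the contents of the K15 FTP directory referenced above.
# Assumes the NOAA FTP server is still reachable and the path is unchanged.
from ftplib import FTP

with FTP("ftp.ncdc.noaa.gov") as ftp:
    ftp.login()                       # anonymous login
    ftp.cwd("pub/data/scpub201506")   # directory from the 'Download the Data via FTP' link
    for name in ftp.nlst():           # file names only; no archival metadata is exposed here
        print(name)
```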

Biosketch:  

John Bates received his Ph.D. in Meteorology from the University of Wisconsin-Madison in 1986. Post Ph.D., he spent his entire career at NOAA, until his retirement in 2016.  He spent the last 14 years of his career at NOAA’s National Climatic Data Center (now NCEI) as a Principal Scientist, where he served as a Supervisory Meteorologist until 2012.

Dr. Bates’ technical expertise lies in atmospheric sciences, and his interests include satellite observations of the global water and energy cycle, air-sea interactions, and climate variability. His most highly cited papers are in observational studies of long term variability and trends in atmospheric water vapor and clouds.

He received the NOAA Administrator’s Award in 2004 for “outstanding administration and leadership in developing a new division to meet the challenges to NOAA in the area of climate applications related to remotely sensed data”. He was awarded a U.S. Department of Commerce Gold Medal in 2014 for visionary work in the acquisition, production, and preservation of climate data records (CDRs). He has held elected positions at the American Geophysical Union (AGU), including Member of the AGU Council and Member of the AGU Board. He has played a leadership role in data management for the AGU.

He is currently President of John Bates Consulting Inc., which puts his experience and leadership in data management to use in helping clients improve the preservation, discovery, and exploitation of their own and others’ data. He has developed and applied techniques for assessing both organizational and individual data management and applications. These techniques help identify how data can be managed more cost effectively and discovered and applied by more users.

David Rose in the Mail on Sunday

David Rose of the UK Mail on Sunday is working on a comprehensive exposé of this issue [link].

Here are the comments that I provided to David Rose, some of which were included in his article:

Here is what I think the broader implications are.  Following ClimateGate, I made a public plea for greater transparency in climate data sets, including documentation.  In the U.S., John Bates has led the charge in developing these data standards and implementing them.  So it is very disturbing to see the institution that is the main U.S. custodian of climate data treat this issue so cavalierly, violating its own policy.  The other concern that I raised following ClimateGate was overconfidence and inadequate assessments of uncertainty.  Large adjustments to the raw data, and substantial changes in successive data set versions, imply substantial uncertainties.  The magnitude of these uncertainties influences how we interpret observed temperature trends, ‘warmest year’ claims, and how we interpret differences between observations and climate model simulations.  I also raised concerns about bias; here we apparently see Tom Karl’s thumb on the scale in terms of the methodologies and procedures used in this publication.

Apart from the above issues, how much difference do these issues make to our overall understanding of global temperature change?  All of the global surface temperature data sets employ NOAA’s GHCN land surface temperatures.  The NASA GISS data set also employs the ERSST datasets for ocean surface temperatures.  There are global surface temperature datasets, such as Berkeley Earth and HadCRUT, that are relatively independent of the NOAA data sets and that agree qualitatively with the new NOAA data set.  However, there remain large, unexplained regional discrepancies between the NOAA land surface temperatures and the raw data.  Further, there are some very large uncertainties in ocean sea surface temperatures, even in recent decades.  Efforts by the global numerical weather prediction centers to produce global reanalyses, such as the European Copernicus effort, are probably the best way forward for the most recent decades.

Regarding uncertainty, ‘warmest year’, etc., there is a good article in the WSJ: Change would be healthy at U.S. climate agencies (hockeyshtick has reproduced the full article).

I also found this recent essay in phys.org to be very germane: Certainty in complex scientific research an unachievable goal. Researchers do a good job of estimating the size of errors in measurements but underestimate the chance of large errors.

Backstory

I have known John Bates for about 25 years, and he served on the Ph.D. committees of two of my graduate students.  There is no one, anywhere, who is a greater champion of data integrity and transparency.

When I started Climate Etc., John was one of the few climate scientists that contacted me, sharing concerns about various ethical issues in our field.

Shortly after publication of K15, John and I began discussing our concerns about the paper.  I encouraged him to come forward publicly with his concerns.  Instead, he opted to try to work within the NOAA system to address the issues – to little effect.  Upon his retirement from NOAA in November 2016, he decided to go public with his concerns.

He submitted an earlier, shorter version of this essay to the Washington Post, in response to the 13 December article (climate scientists frantically copying data).  The WaPo rejected his op-ed, so he decided to publish at Climate Etc.

In the meantime, David Rose contacted me about a month ago, saying he would be in Atlanta covering a story about a person unjustly imprisoned [link]. He had an extra day in Atlanta, and wanted to get together.  I told him I wasn’t in Atlanta, but put him in contact with John Bates.  David Rose and his editor were excited about what John had to say.

I have to wonder how this would have played out if we had issued a press release in the U.S., or if this story was given to pretty much any U.S. journalist working for the mainstream media.  Under the Obama administration, I suspect that it would have been very difficult for this story to get any traction.  Under the Trump administration, I have every confidence that this will be investigated (but still not sure how the MSM will react).

Well, it will be interesting to see how this story evolves, and most importantly, what policies can be put in place to prevent something like this from happening again.

I will have another post on this topic in a few days.

Being retired sure is liberating . . .

Moderation note:  As with all guest posts, please keep your comments civil and relevant.


Filed under: Data and observations, Ethics
07 Feb 02:01

Microaggressions, Macro Debate

by Musa Al-Gharbi

The concept of microaggressions gained prominence with the publication of Sue et al.’s 2007 “Racial Microaggressions in Everyday Life,” which defined microaggressions as communicative, somatic, environmental or relational cues that demean and/or disempower members of minority groups in virtue of their minority status. Microaggressions, they asserted, are typically subtle and ambiguous. Often, they are inadvertent or altogether unconscious. For these reasons, they are also far more pervasive than other, more overt, forms of bigotry (which are less-tolerated in contemporary America).

The authors propose a tripartite taxonomy of microaggressions:

  • Microassaults involve explicit and intentional racial derogation;
  • Microinsults involve rudeness or insensitivity towards another’s heritage or identity;
  • Microinvalidations occur when the thoughts and feelings of a minority group member seem to be excluded, negated or nullified as a result of their minority status.

The authors then present anecdotal evidence suggesting that repeated exposure to microaggressions is detrimental to the well-being of minorities. Moreover, they assert, a lack of awareness about the prevalence and impact of microaggressions among mental health professionals could undermine the practice of clinical psychology—reducing the quality and accessibility of care for those who may need it most.

Towards the conclusion, however, the authors acknowledge the “nascent” state of research on microaggressions and call for further investigation. They emphasize that future studies should focus first and foremost on empirically substantiating the harm caused by microaggressions, and documenting how people cope (or fail to cope) with experiencing them. They further suggest research should probe whether or not there is systematic variation as to who incurs microaggressions, which type or types of microaggressions particular populations tend to endure, how harmful microaggressions are to different groups and in different circumstances, and in which contexts microaggressions tend to be more or less prevalent. Finally, the authors recommend expanding microaggression research to include incidents against gender and sexual minorities, and those with disabilities.

The State of Microaggression Research Today

In the decade following Sue et al.’s landmark paper, there have been extensive discussions about microaggressions—among practitioners, in the academic literature, and increasingly, in popular media outlets and public forums. But unfortunately, very little empirical research has been conducted to actually substantiate the ubiquity of microaggressions, to catalog the harm they cause, or to refine the authors’ initial taxonomy.

In “Microaggressions: Strong Claims, Inadequate Evidence,” published in the latest issue of Perspectives on Psychological Science, HxA member Scott Lilienfeld highlights five core premises undergirding the microaggression research program (MRP):

  1. Microaggressions are operationalized with sufficient clarity and consensus to afford rigorous scientific investigation.
  2. Microaggressions are interpreted negatively by most or all minority group members.
  3. Microaggressions reflect implicitly prejudicial and implicitly aggressive motives.
  4. Microaggressions can be validly assessed using only respondents’ subjective reports.
  5. Microaggressions exert an adverse impact on recipients’ mental health.

His comprehensive meta-analysis suggests that there is “negligible” support for these axioms—individually or (especially) collectively.

However, Lilienfeld emphasizes that an absence of evidence regarding the prevalence and harm of microaggressions should not be interpreted as evidence of absence. Over the course of the essay he repeatedly asserts that it is “undeniable” that minorities regularly experience slights which could be construed as microaggressions; he acknowledges that these incidents are often deeply unpleasant or unsettling for affected minorities, and likely harmful in aggregate. Nonetheless, important research questions remain, namely: how harmful are microaggressions, for whom, in what ways and under what circumstances?

These are not just matters of intellectual curiosity but, rather, prerequisites for crafting effective responses, evaluating attempted interventions, and minimizing iatrogenesis along the way. It is similarly critical to clarify and substantiate claims about microaggressions for the sake of blunting skepticism and resistance—particularly from those whose identity, perceived interests and routines are most likely to be challenged by reforms in social norms, practices and policies (i.e. those who are white, native-born, heterosexual, able in body and mind, economically-comfortable and/or men). Finally, it is essential to the continued integrity and credibility of social research that basic evidentiary standards be met—especially for strong psychological claims—particularly in light of how prominent and politicized the issue of microaggressions has become.

In other words, it is in everyone’s interest to address the profound conceptual and evidentiary shortcomings of the MRP literature to date.

Evidentiary Gaps

According to Lilienfeld, one of the most striking aspects of microaggression research is that over the course of nearly ten years, the literature has hardly advanced beyond the taxonomy and methods laid out in the original paper.

For instance, with regards to demonstrating the harm caused by microaggressions, there has been very little engagement with contemporary cognitive or behavioral research—and virtually no experimental testing. Instead, advocates have relied almost exclusively on small collections of anecdotal testimonies, from samples that are neither randomized nor established as representative of any particular population. This is problematic, Lilienfeld asserts, because the preponderance of contemporary social psychological research strongly suggests that the perception of, and response to, microaggressions would vary a great deal between and within minority populations as a result of individuals’ particular situational, cognitive, psychological, cultural, and personality traits.

It is important to account for these factors in order to isolate and better measure the potential harm caused by microaggressions. Identifying the impact of particular traits on microaggression response could also help researchers determine who is most sensitive to perceiving microaggressions, and who is most adversely affected by them—allowing for tailored interventions to better assist those who are particularly vulnerable.

Meanwhile, collecting information on the base-rates of microaggressions can help researchers identify exemplary environments where these incidents seem relatively rare, as well as environments which seem especially toxic. This can help prioritize interventions and provide models for reform. Base-rate information is also essential for evaluating whether particular interventions seem to be increasing, decreasing or failing to impact the prevalence of microaggressions…not to mention determining how bad the problem is to begin with.

Conceptual Problems

Beyond the evidentiary gaps, Lilienfeld asserts that one of the biggest problems with microaggression literature is the lack of clarity on exactly what does constitute a microaggression, what does not, and in virtue of what.

For instance, Lilienfeld argues that microassaults should probably be struck from the taxonomy: the examples provided in the literature tend not to be “micro” at all, but outright assaults, intimidation, harassment and bigotry–even rising to the level of crimes in some instances. In contrast with microinvalidations or microinsults, microassaults are necessarily overt, intentional and hateful acts. Including these types of incidents as “microaggressions” pollutes conceptual clarity…as does the term “microaggression” itself.

“Aggression” implies hostile intent. Yet microaggressions, as defined in the literature, tend to involve neither hostility nor intent. Most violations are microinsults and microinvalidations—which are typically unintentional slights resulting from ignorance, insensitivity or unconscious bias among people of goodwill. By classifying such incidents as “microaggressions,” those who commit these faux pas as “perpetrators,” and those who experience them as “victims,” all parties involved become disposed towards responding to incidents in a confrontational rather than conciliatory fashion, as both sides feel unfairly maligned or mistreated. Lilienfeld suggests advocates would be better served by revising terms and concepts to better capture the indirect and typically inadvertent nature of the phenomena in question.

However, further refinement will also be necessary. At the moment, the concept is so inclusive that even those committed to doing the “right thing” often find themselves in impossible situations:

Assume a white teacher puts forward a question and a number of students raise their hands in response—including some minority students. According to microaggression literature, if the teacher fails to call on the minority student(s), this could be interpreted as a microaggression. However, deciding to call on a minority student would merely create a new dilemma: if the instructor criticizes or challenges any aspect of the student’s response, this could also be construed as a microaggression. On the other hand, if the teacher praises the student’s answer as insightful or articulate, this might also be considered a microaggression.

That is, for those who are “privileged” (i.e. white, native-born, heterosexual, able in body and mind, economically-comfortable and/or a man), virtually anything one says or does could be construed as a microaggression. In such a climate it may seem desirable or even necessary for many to minimize interactions with those outside their identity group(s) in order to avoid needless (but otherwise seemingly inevitable) conflict. This is a major problem given that, according to Sue et al., the main purpose of the MRP is to foster broader and deeper openness, understanding, dialogue and cooperation.

Derald Wing Sue Responds

In “Racial Microaggressions in Everyday Life” Sue and his collaborators acknowledged the need for further empirical research on microaggressions, and suggested avenues future work should prioritize. Lilienfeld has argued that these recommendations have gone largely unheeded, and as a result, many of the authors’ claims remain just as tenuous in 2017 as they were in 2007.

In a rejoinder, entitled “Microaggressions and ‘Evidence’: Empirical or Experiential Reality,” Sue declines to contest Lilienfeld’s overall picture. In fact, he acknowledges that the critiques are generally valid—adding that he actually shares many of the concerns Lilienfeld raised about the state of microaggression research.

Given this apparently broad agreement between Sue and Lilienfeld, most of the rest of the rejoinder proves perplexing. For instance, despite having called for further empirical research on multiple occasions himself, Sue claims (without support) that highlighting conceptual or evidentiary gaps in the MRP somehow undermines or negates the phenomenological significance of microaggressions—an assertion which is particularly baffling given that Lilienfeld repeatedly calls for greater emphasis and attention to the subjective reality of microaggressions.

Sue then insists that psychology, science and empiricism are not the only ways of understanding human experience, nor are they necessarily the best method(s) in every instance. Of course, one anticipates Lilienfeld would simply agree with this point—albeit while insisting that context also matters with regards to which tools or frameworks are most useful or important:

In the 2007 essay and subsequent works, Dr. Sue was speaking as a psychologist, and relying on his credentials as a psychologist to publish and disseminate his work—often in journals related to the social and behavioral sciences or the clinical practice of psychology/psychiatry. Engaging in these capacities entails agreeing to the evidentiary, methodological and ethical norms or standards of one’s chosen profession or field. Lilienfeld was arguing that the current state of research on microaggressions seems to fall short in these regards—nothing more, nothing less. So there is a sense in which Sue’s response, evoking questions about the ultimate nature of truth or humanity, is more-or-less irrelevant to Lilienfeld’s claims.

More broadly, it seems disingenuous for Sue to put forward microaggression research as scientific when this seems to lend credibility to his project, but then claim microaggressions need not be subject to empirical validation when faced with criticism.

Ultimately, Sue responds directly to only one of Lilienfeld’s 18 recommendations—namely that until microaggressions are better understood, we should be conservative in executing policies intended to address them. This notion is condescendingly dismissed with an assertion that only “[t]hose in the majority group, those with power and privilege, and those who do not experience microaggressions are privileged to enjoy the luxury of waiting for proof.”

Such a reply is striking given the long and ignoble history of harm caused by hastily applied (and often later discredited) social and psychological research—with the costs borne primarily by women, people of color, the poor and other vulnerable populations. In other words, Lilienfeld’s advice should not be understood as an expression of privilege: guarding against iatrogenesis and adverse second order effects is important, including for minorities—perhaps especially for minorities.

In this instance, it seems highly plausible that poorly conceived or implemented policies intended to address microaggressions could endanger the free exchange of ideas, lead to unjustly severe consequences for minor (even unintentional) infractions, heighten animus between minority and majority groups, or even exacerbate the harm caused by microaggressions (for instance by making already vulnerable individuals even more sensitive to perceived slights or injustices).

In virtually any of these eventualities everyone—including minorities—may be worse off than before. This prospect seems to warrant more than a snarky retort about Lilienfeld’s supposed privilege—especially given the lack of reliable data about the prevalence and harm of microaggressions, which could otherwise help to avoid these unfortunate outcomes by enabling more nuanced policy responses from university administrators.


Opinions expressed are those of the author(s). Publication does not imply endorsement by Heterodox Academy or any of its members. We welcome your comments below. Feel free to challenge and disagree, but please try to model the sort of respectful and constructive criticism that makes viewpoint diversity most valuable. Comments that include obscenity or aggression are likely to be deleted.

07 Feb 01:41

Conflating the climate problem with the solution

by curryja

“one of the real tragedies that totally distorted the debate over climate change was that it got tied into the solution in a way that if you accepted the first you had to accept the second. And I think that was profoundly wrong.” – Newt Gingrich

At the annual meeting of the National Council for Science and the Environment (NCSE), Newt Gingrich made a very interesting presentation, which was reported in an EOS article.  Excerpts:

Gingrich was not the usual fare for an NCSE conference, and he acknowledged that at the beginning of his talk, saying, “I realize that, particularly with all of the changes of the last few days, that having a right-wing Republican show up [at this conference] is probably not what all of you have signed up for.”

When asked why Gingrich was invited to speak, NCSE provided Eos with a written statement referring to the “shifting political landscape” and the importance of “hearing all perspectives.” 

Former House Speaker urges thoughtful, aggressive, articulate arguments to influence an administration that he says generally lacks its own plan.

“You can hunker down and decide you want to be oppositionist and that you are going to hate everything and life will be terrible,” or you can dig in and work with the administration, said Gingrich, a former speaker of the House of Representatives.

“If you go in aggressive enough and articulate enough and have thought it through enough, you are going to shape large parts of this administration,” he said.

Gingrich in his presentation argued that the new administration has a focus on science, engineering, and technology. He pointed to Trump’s inaugural address, which calls for “unlock[ing] the mysteries of space” and “harness[ing] the energies, industries and technologies of tomorrow.”

“Part of your challenge here is for you to feed back to them and say, ‘Look, if you want to achieve these goals, this is the kind of investment that you have to make,’” he told the audience.

However, he said, there can be measures that move toward sustainability that are compatible with the administration’s goals. He pointed to Tesla as an example and the increasing popularity of electric vehicles.

After his speech, Gingrich told Eos that Trump should balance America’s economic interests related to climate change. Gingrich added, though, “I’m very skeptical of the stuff that Obama agreed to” in dealing with climate change.

JC reflections

Well, one good thing that is emerging from the Trump administration is increasing open mindedness in scientific and environmental organizations.  Hard to imagine Newt Gingrich being invited to such a meeting under the Obama administration.

I was particularly struck by Gingrich’s statement:

“one of the real tragedies that totally distorted the debate over climate change was that it got tied into the solution in a way that if you accepted the first you had to accept the second. And I think that was profoundly wrong.”

Citizens understand this, but apparently many scientists do not.  At the scientist demonstration for global warming at the AGU meeting, the slogan was something like this:

  • It’s warming
  • It’s caused by us
  • It’s dangerous
  • We can do something about it

The ‘problem’ of global warming is utterly conflated with its ‘solution’, in the eyes of many climate scientists, not to mention the UNFCCC.  I have argued many times that we have oversimplified both the problem of global warming and its solution.

Once you break the link in the reasoning described in the above bullets, we may have a chance at developing a true understanding of climate variability and change, how extreme weather and slow climate change influence societies and ecosystems, and the most effective ways of dealing with the regional impacts of climate variability and extreme events in a manner that accounts for a region’s specific vulnerabilities and socioeconomic situation.

The other important point made by Gingrich is that Trump is very interested in science and technological advances.  Fleshing out the details of any such plans hasn’t begun to happen, which has resulted in Trump being called anti-science because climate science, etc., isn’t at the top of his agenda.  Trump seems to have a relatively open mind, so there seems to be much opportunity for rational and well-argued inputs to be provided.

Instead, we see scientists marching on DC (exactly toward what end, I haven’t been able to figure out).

JC message to scientists:  start behaving like scientists, and make your arguments for why you think something is important and why it should be funded.  Whining and playing politics doesn’t look like it will help your cause.


Filed under: Policy, Politics
06 Feb 20:48

Will Liberals Learn to Love the 10th Amendment?

by Damon Root
Remlaps

ht Whig Zhou

In the 1997 case Printz v. United States, the U.S. Supreme Court ruled it unconstitutional for the federal government to direct state and local law enforcement officers to enforce certain provisions of the 1993 Brady Handgun Violence Prevention Act.

"The Federal Government may neither issue directives requiring the States to address particular problems," the late Justice Antonin Scalia wrote in his majority opinion, "nor command the States' officers, or those of their political subdivisions, to administer or enforce a federal regulatory program." In short, Printz held, the feds may not commandeer the states for federal purposes.

At the time it was decided, Printz was criticized by many liberals for being a "conservative" decision that promoted states' rights at the expense of duly enacted national reforms. In other words, they saw it as a case of the 10th Amendment run amok.

Liberals today are more likely to view Scalia's handiwork in a far more favorable light. That's because Printz now serves as perhaps the single best legal precedent in support of the constitutionality of so-called sanctuary cities—municipalities that either won't help the federal government round up and deport undocumented immigrants or otherwise refuse to participate in the enforcement of federal immigration laws.

Sanctuary cities have become a hot topic since the election of Donald Trump. Less than a week after Trump won, New York Gov. Andrew Cuomo took to Facebook with a defiant message for the incoming administration. "We won't allow a federal government that attacks immigrants to do so in our state," he declared. Chicago Mayor Rahm Emanuel was equally blunt: The Windy City, he said, "will always be a sanctuary city." Los Angeles Police Chief Charlie Beck announced that his department was "not going to work in conjunction with Homeland Security on deportation efforts. That is not our job, nor will I make it our job."

Federal authorities retain their own power to enforce national laws in those places. But the lack of meaningful local cooperation is no small hindrance. In effect, these cities are a bulwark against the far-reaching national agenda of border hawks in Washington.

If you like the sound of that, take a moment to thank Justice Scalia. As he made clear in Printz, "federal commandeering of state governments" goes against the text, structure, and history of the Constitution. Trump may not want to hear it, but "such commands are fundamentally incompatible with our system of dual sovereignty."

06 Feb 20:48

Global Temperature Update

by admin
Remlaps

h/t Whig Zhou

I just updated my climate presentation with data through December of 2016, so given "hottest year evah" claims, I thought I would give a brief update with the data that the media seldom ever provides.  This is only a small part of my presentation, which I will reproduce for Youtube soon (though you can see it here at Claremont-McKenna).  In this post I will address four questions:

  • Is the world still warming?
  • Is global warming accelerating?
  • Is global warming "worse than expected"?
  • Coyote, How Is Your Temperature Prediction Model Doing?

Is the world still warming?  Yes

We will use two data sets.  The first is the land surface data set from the Hadley Center in England, the primary data set used by the IPCC.  Rather than average world absolute temperature, all these charts show the variation or "anomaly" of that absolute temperature from some historical average (the zero point of which is arbitrary).  The theory is that it is easier and more accurate to aggregate anomalies across the globe than it is to average the absolute temperature.  In all my temperature charts, unless otherwise noted, the dark blue is the monthly data and the orange is a centered 5-year moving average.
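To make the charting conventions concrete, here is a minimal sketch of the centered 5-year (60-month) moving average; the anomaly series below is hypothetical and would in practice be loaded from the Hadley or UAH files.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly anomaly series; in practice load the HadCRUT or UAH data.
months = pd.date_range("1979-01-01", "2016-12-01", freq="MS")
anomaly = pd.Series(np.random.normal(0.3, 0.15, len(months)), index=months)

# Centered 5-year (60-month) moving average -- the smoothed line on the charts.
smoothed = anomaly.rolling(window=60, center=True).mean()
print(smoothed.dropna().tail())
```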

You can see the El Nino / PDO-driven spike last year.  Ocean cycles like El Nino are complicated, but in short, oceans hold an order of magnitude or two more heat than the atmosphere.  There are decadal cycles where oceans will liberate heat from their depths into the atmosphere, creating surface warming, and cycles where oceans bury more heat, cooling the surface.

The other major method for aggregating global temperatures is using satellites.  I use the data from University of Alabama, Huntsville.

On this scale, the El Nino peaks in 1999 and 2017 are quite obvious.  Which method, surface or satellites, gets a better result is a matter of debate.  Satellites are able to measure a larger area, but they are not actually measuring the surface; they are measuring temperatures in the lower troposphere (the troposphere's depth varies but ranges from the surface to 5-12 miles above it).  However, since most climate models and the IPCC show man-made warming being greatest in the lower troposphere, it seems a good place to measure.  Surface temperature records, on the other hand, measure exactly where we live, but stations can be widely spaced and are subject to a variety of biases, such as the urban heat island effect.  The station below in Tucson, located in a parking lot and surrounded by buildings, was an official part of the global warming record until my picture became widely circulated and embarrassed them into closing it.

This argument about dueling data sets goes on constantly, and I have not even mentioned the issues of manual adjustments in the surface data set that are nearly the size of the entire global warming signal.  But we will leave these all aside with the observation that all data sources show a global warming trend.

Is Global Warming Accelerating?  No

Go into google and search "global warming accelerating".  Or just click that link.  There are a half-million results about global warming accelerating.  Heck, Google even has one of those "fact" boxes at the top that say it is:

It is interesting by the way that Google is using political advocacy groups for its "facts" nowadays.

Anyway, if global warming is so obviously accelerating that Google can list it as a fact at the top of its search page, it should be obvious from the data, right?  Well let's look.  First, here is the satellite data since I honestly believe it to be of higher quality than the surface records:

This is what I call the cherry-picking chart.  Everyone can find a peak for one end of their time scale and a valley for the other and create whatever story they want.  In economic analysis, to deal with the noise and cyclicality, one will sometimes see economic growth measured peak-to-peak, meaning from one cyclical peak to the next, as a simple way to filter out some of the cyclicality.  I have done the same here, taking my time period as the roughly 18 years from the peak of the 1999 El Nino to the peak of the recent El Nino in 2017.  The exact data used for the trend is shown in darker blue.  You can decide if I have been fair.
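A rough sketch of that peak-to-peak calculation, assuming `anomaly` is a pandas Series of monthly anomalies indexed by date (the window dates below are illustrative; in practice you would pick the actual El Nino peak months):

```python
import numpy as np

# `anomaly` is assumed to be a pandas Series of monthly anomalies (deg C)
# indexed by date, e.g. the UAH lower-troposphere series.
window = anomaly["1999-01":"2016-12"]   # illustrative Nino-peak-to-Nino-peak window

# Ordinary least-squares trend over the window, expressed in deg C per century.
years = window.index.year + (window.index.month - 1) / 12.0
slope_per_year, _ = np.polyfit(years, window.values, 1)
print(f"Peak-to-peak trend: {slope_per_year * 100:.2f} C per century")
```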

The result for this time period is a Nino-to-Nino warming trend of 0.11C.  Now let's look at the years before this:

So the trend for 36 years is 1.2C per century, but the trend for the last half of this is just 0.11C.  That does not look like acceleration to me.  One might argue that it may again accelerate in the future, but I cannot see how so many people blithely treat it as a fact that global warming has been accelerating when it clearly has not.  But maybe it's just because I picked those darn satellites.  Maybe the surface temperatures show acceleration?

Nope.  Though the slowdown is less dramatic, the surface temperature data nevertheless shows the same total lack of acceleration.

Is Global Warming "Worse Than Expected"?  No

The other meme one hears a lot is that global warming is "worse than expected".  Again, try the google search I linked.  Even more results, over a million this time.

To tackle this one, we have to figure out what was "expected".  Al Gore had his crazy forecasts in his movie.  One sees all kinds of apocalyptic forecasts in the media.  The IPCC has forecasts, but it tends to change them every five years and seldom goes back and revisits them, so those are hard to use.  But we have one from James Hansen, often called the father of global warming and Al Gore's mentor, from way back in 1988.  His seminal testimony in that year in front of Congress really put man-made global warming on the political map.  Here is the forecast he presented:

Unfortunately, in his scenarios, he was moving two different variables (CO2 levels and volcanoes), so it is hard to tell which one applies best to the actual history since then, but we are almost certainly between his A and B forecasts.  A lot of folks have spent time trying to compare actual temperatures to these lines, but it is very hard.  The historical temperature record Hansen was using has been manually adjusted several times since, so the historical data does not match, and it is hard to get the right zero point.  But we can eliminate the centering issues altogether if we just look at slopes -- that is all we really care about anyway.  So I have reproduced Hansen's data in the chart on the left and calculated the warming slopes in his forecast:

As it turns out, it really does not matter whether we choose the A or B scenario from Hansen, because both have about the same slope -- between 2.8C and 3.1C per century of warming from 1986 (which appears to be the actual zero date of Hansen's forecast) and today.  Compare this to 1.8C of actual warming in the surface temperature record for this same period, and 1.2C in the satellite record.  While we have seen warming, it is well under the rates predicted by Hansen.
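The point above about slopes being immune to the choice of zero point can be illustrated with a toy calculation; the series below is made up purely for illustration.

```python
import numpy as np

def trend_per_century(years, temps):
    # Ordinary least-squares slope, scaled from deg C per year to deg C per century.
    slope_per_year, _ = np.polyfit(years, temps, 1)
    return slope_per_year * 100.0

years = np.arange(1986, 2017)
series = 0.018 * (years - 1986)        # hypothetical 1.8 C/century ramp
shifted = series + 0.5                 # same series with a different zero point

print(trend_per_century(years, series))   # ~1.8
print(trend_per_century(years, shifted))  # identical slope; the offset drops out
```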

This is consistent with what the IPCC found in its last assessment when it evaluated past forecasts.  The colored areas are the IPCC forecast ranges from past forecasts, and the grey area is the error bar (the IPCC is a bit inconsistent when it shows error bars, seemingly including error bands only when it helps their case).  The IPCC came to the same result as I did above: that warming had continued but was well under the pace that was "expected" from past forecasts.

By the way, the reason that many people may think global warming is accelerating is that media mentions of global warming and severe weather events have been accelerating, leaving the impression that things are changing faster than they truly are.  I wrote an article about this effect here at Forbes.  In it, I began:

The media has two bad habits that make it virtually impossible for consumers of, say, television news to get a good understanding of trends:

  1. They highlight events in the tail ends of the normal distribution and authoritatively declare that these data points represent some sort of trend or shift in the mean
  2. They mistake increases in their own coverage of a certain phenomenon for an increase in the frequency of the phenomenon itself.

Coyote, How Is Your Temperature Prediction Model Doing?  Great, thanks for asking

Ten years ago, purely for fun, I attempted to model past temperatures using only three inputs: a decadal cyclical sine wave, a long-term natural warming trend out of the Little Ice Age (of 0.36C per century), and a man-made warming trend really kicking in around 1950 (of 0.5C per century).  I used this regression as an attribution model, to see how much of past warming might be due to man (I concluded about half of 20th century warming may be due to manmade effects).  But I keep running it to test its accuracy, again just for fun, as a predictive tool.  Here is where we are as of December of 2016 (in this case the orange line is my forecast line):
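A minimal sketch of that kind of three-input fit (a cyclical sine wave plus two linear ramps), using scipy's curve_fit; the cycle length, reference years, and starting values below are illustrative assumptions, not the exact ones behind the chart.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(year, amp, phase, natural_rate, manmade_rate):
    # Cyclical sine wave + natural recovery ramp + man-made ramp starting in 1950.
    # The ~66-year cycle length and 1900 reference year are assumptions for illustration.
    cycle = amp * np.sin(2.0 * np.pi * (year - phase) / 66.0)
    natural = natural_rate * (year - 1900.0)
    manmade = manmade_rate * np.clip(year - 1950.0, 0.0, None)
    return cycle + natural + manmade

# `years` and `anomaly` would be the observed record (e.g. HadCRUT, as decimal years).
# Starting values roughly match the rates quoted above (0.36 and 0.5 C per century).
# params, _ = curve_fit(model, years, anomaly, p0=[0.2, 1900.0, 0.0036, 0.005])
```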

Still hanging in there:  Despite the "hottest year evah" news, temperatures in December were exactly on my prediction line.  Here is the same forecast with the 5-year centered moving average added in light blue:

03 Feb 04:47

The Psychology Behind High School - From the Viewpoint of a High Schooler

As I have entered high school this year, I have noticed many changes in my peers and classmates. Many people began to drink coffee, curse every other sentence, and change their looks by getting nose rings, dyeing their hair, or getting earrings (for some guys). I am not saying these things are bad. It's not my style, but it's their life. However, I have noticed that it all happened at once in the last year, so I've decided to take a look into the psychology behind these changes.


Rebellion

Children do these things as a form of rebellion against cultural standards and higher authorities (parents).

I have learned that my peers, and many other high school students go through this change as a way of trying to grow up and become more independent from those who support them. 

This can be dangerous for a child's future though, because as they try to fit in, they might give up on the activities that had been important to them growing up (like playing piano) because their friends aren't interested in them continuing, but their parents are. (Parents unhappy = Friends happy) 


Now, why drink coffee, dye your hair, or get body piercings? This is partly rebellion against authorities, but it is also rebellion against the "norms" of society for this age group. High school kids do this to feel unique among their peers, to feel "separate" from society, and to feel closer to those they "hang out" with. When they go through these changes, they feel more like themselves, and less like the people their parents raised them to be.

This can be harmful, though, because it can ruin relationships that they may have had with family members and some of their friends. They might do something to themselves that they will later regret.


Here's an example: from the time you were eight, your mother told you not to curse because it is impolite. You enter high school and want to appeal to those around you. What do you do? You do what your mother and their mothers told each of you not to do from the time you were eight. For some reason, people (not just kids) are attracted to these rebellious behaviors. So to "fit in," you almost certainly have to rebel against higher authorities.


Changes in Different Age Groups

13-15 - At this point, it is necessary to allow children to experiment with who they are and opposition is expressed in order to gather power of self-determination. Children will learn less from having their parents yell at them constantly that they are making mistakes, and more by seeing the consequences of their mistakes unfold before their eyes. Parents should offer positive reinforcement and advice, to help their children figure out what the difference is between right and wrong.


15-18 - During these years, children often try to escape the title of "good child" to their parents. This is the point where children realize that graduation is coming, and begin to try to demonstrate that they don't need their parents anymore through the "art" of risk taking. This is a dangerous stage, because some children are arrogant, and may make decisions that can hurt them in the long run. So parents need to grant the child more independence while, at the same time, expecting them to be responsible. They should provide calm and clear expectations about any risk taking the child may or may not be involved in.


18-23 - This is the stage where most young adults have  shaken off higher authority, yet some are still rebelling, this time against personal responsibility (themselves). They might know what is right, and decide not to do it anyway. This is usually the last stage of youthful rebellion, and becoming an adult.  Unfortunately, some people never grow out of this phase of life. 



   Now, I am going to tell you of someone who my parents went to high school with, who sadly didn't make it past the last phase of youthful rebellion. 

    In this Sports Illustrated article, Greg Rowles is described as ". . .The type of guy who would shake up a room when he walked through the door." ". . .The center of attention, without ever trying to place himself there." "People rallied around him, they followed his lead." "He was known for his sarcastic humor and his boundless energy, and he had a one-liner for every occasion." "Among his friends, he was known as the funniest man in North America."

Greg was obviously well known and well liked. He had everything going for him, until he decided to get behind the wheel of a car after drinking. (This is an example of knowing something is wrong, rebelling against yourself, and doing it anyway.) Greg's father felt dismayed and angry at the stupidity involved in his son's death, as did many of his close friends.


Source for all images: pixabay.com (License: CC0 Public Domain)


Thanks for reading this! It took a while to write, but I have been meaning to touch on this topic for months. I (my dad) finally figured out how to put words next to images in HTML, so yay! Let me know your opinions on this article, and please remember that not every person who does the things I used as examples on this list is going through a state of rebellion; some are, and some feel that these changes help them to be happier in life. Please remember to check back later!

Also remember to check for: My weekly 7 post, [Something else will eventually go into this space]!


03 Feb 04:43

Faster, Please

by Kate

Related: Our Sore Loser Elites Are Losing Their Minds.

01 Feb 15:42

This disposable drone made from cardboard can deliver medicines where people can't go

Drones are fairly expensive, so they typically are not disposable. And since a drone needs a battery to fly back home, it's mostly not suitable for use over long distances. But what do you do when a group of people needs medicine where there are no roads?

https://otherlab-com-prod.s3.amazonaws.com/images/Otherlab_SkyMachines_APSARA.jpg

The emergency relief drone that may one day save your life

The aerodynamics research group at Otherlab in San Francisco developed a disposable drone made from cardboard that can deliver medicines to places people can hardly reach. Think of a disease outbreak in an area where the terrain is so bad that it would take far too much time to reach the area by truck. The research engineers at Otherlab received a grant to come up with a solution for this problem. What they made might be the world's most advanced paper airplanes.

The emergency relief drone is made of cheap cardboard and has no built-in motor or battery. Therefore the drone needs to be launched from an aircraft or from another drone, and it can't make a return trip home.

https://otherlab-com-prod.s3.amazonaws.com/images/APSARA04.png

The paper glider has a small built-in mini computer as well as sensors to deliver the vaccines or other medical supplies to its programmed location. It's possible to load an aircraft with several hundred paper gliders, each loaded with medicines and its own delivery coordinates preprogrammed. In this way one airplane could conduct delivery operations covering an area the size of California or Italy.

Video

https://www.youtube.com/watch?v=CPpOAhyliBA

Click the link to play the video.

Besides the needed medicines, kids also have a nice paper glider to play with after the drop.

More info: https://otherlab.com/blog/post/industrial-paper-airplanes-for-autonomous-aerial-delivery

This is a 100% Steem Power post!

Don't miss out on my next post! Follow @penguinpablo
31 Jan 22:33

Kim Dotcom: Megaupload 2 Delay Due To ‘Failed Merger’, Bitcoin Price Affected

by CoinTelegraph

Kim Dotcom has revealed that the reason for delaying his Megaupload 2 / Bitcache unveiling was a failed merger with the Canadian company SecureCom.

31 Jan 22:07

Steemit Founder: Bitcoin Mining Model is Fountain of Youth for Mainstream Media

by CoinTelegraph

Cointelegraph spoke to Ned Scott, Steemit Founder, about the company’s new roadmap, the lessons from the hack, fake news, censorship and the future of media.

30 Jan 14:24

Ep. 835 Entrepreneur Starts Network of Private Schools; Outperforms, Underspends Public Schools

by Tom Woods

Entrepreneur and Mises Institute benefactor Bob Luddy grew frustrated trying to work within the system, and eventually established a series of private schools whose results have been outstanding. We get the details in today’s episode.

Sponsor

No more ill-fitting suits off the $99 clearance rack. Get a beautiful custom suit from IndoChino for just $389 (50% off) and free shipping when you visit IndoChino.com and enter code WOODS at checkout.

School Link

Thales Academy

Video Mentioned

Related Episodes

Ep. 777 Three Scams: Higher Education, “More Technology in the Classroom,” and Leftist Comedians (Brett Veinotte)
Ep. 749 Education Without the State
Ep. 623 The End of School: Reclaiming Education from the Classroom (Zachary Slayback)
Ep. 390 Crimes of the Educators: Why Education Is More Screwed Up Than You Think (Alex Newman)
Ep. 315 The Origins of State Education: Myth and Reality
Ep. 303 School vs. Education (Brett Veinotte)

Tom's Free E-Book

Education Without the State

Free Resources!

1) Free guide on how to start your blog or website. Click here to get it. Plus, check out my step-by-step video taking you from no blog to a blog in about five minutes!

2) Free publicity for your blog. As a special thanks if you get your hosting through one of my affiliate links (this one for Bluehost, or this one for WP Engine), I’ll boost your blog. Click here for details.

3) Free History Course: The U.S. Presidents — Politically Incorrect Edition. Get access to this 22-lesson course: 22 videos, 22 mp3 files for listening on the go, and a bibliography of reliable books on the presidents. Get it at FreeHistoryCourse.com!.

4) $160 in Free Bonuses. Free signed copy of my New York Times bestseller The Politically Incorrect Guide to American History, plus a free 10-lesson bonus course on the foundations of liberty, plus a free year’s subscription to LibertyClassroom.com, when you subscribe to the Ron Paul Curriculum site through RonPaulHomeschool.com.

5) Free Books. Boost your intellectual ammunition with my free libertarian eBooks, including 14 Hard Questions for Libertarians — Answered, Bernie Sanders Is Wrong, and Education Without the State. Find them at TomsFreeBooks.com.

Download Audio
30 Jan 13:30

No, Most Restaurants Don’t Fail In The First Year

by Adam Ozimek
Remlaps

h/t Whig Zhou

A woman walks past a sign advertising a restaurant on the boardwalk in Atlantic City, New Jersey, on May 8, 2016. (JEWEL SAMAD/AFP/Getty Images)

A piece of conventional wisdom about restaurants is that most of them close in the first year. An American Express commercial even warned that 90% failed in the first year. This is, to put it simply, false.

For a long time this question has been handled with anecdote or small spotty datasets. But two economists decided to settle this decisively in 2014 using BLS data that covers 98% of U.S. businesses, the QCEW dataset. They tracked single-establishment restaurants from 1992 to 2011, and the study overall is very careful and well-done.

What they find is that only 17% of restaurants close in the first year, not 90%. This is in fact a lower failure rate than other service providing businesses, where 19% fail in the first year. For comparison, they find that 21% of offices of real estate agents and brokers fail in the first year, and the number is 19% for both landscapers and automotive repair. The failure rate for full-service restaurants is the same as the failure rate for insurance agencies and brokerages.

Part of restaurants' reputation may be due to smaller startups, which fail more often. Restaurants with 20 or fewer employees fail more often than other service businesses, but those with 21 or more employees have a median lifespan that is 9 months longer than other businesses of the same size.

The data also show that, as with many industries, the death rates of restaurants have fallen over time. This means a less dynamic economy and may not be a healthy sign overall, but it does mean that the high failure rate myth is less true than ever before.

Finally, it’s important to note that while I use “failure” and “closure” interchangeably here, a closing restaurant is not necessarily an unsuccessful one. It could be the case that the restaurant was doing well, but family or health problems forced a closure. Or maybe the location was successful but the building was sold for another use. Or perhaps the owners were making money but decided they wanted to do something else. In fact, a 2003 study found that 29% of businesses who closed reported they were successful at the time of closure.

Overall, the two economists using BLS data rightly conclude with the following:

“Perhaps due to the visibility and volume of restaurant startups, the public perception is that restaurants often fail. However, as shown in this paper, restaurant turnover rates are not very different from startups of many other different industries.”

This result is not that surprising. Given that leisure and hospitality has consistently been a growing part of the economy for literally the last half century, it would be surprising if it was the disastrous investment implied in the fake 90% statistic.

30 Jan 05:53

If America Is Based on ‘White Supremacy,’ Why Do Millions of Nonwhites Flock Here?

by Tom Woods

Today’s Tom Woods Letter, which all the influential people receive every weekday. Be one of them.

A flyer circulating at the University of Kansas warns people about “neo-Nazis,” adding that such people often like to conceal their true identity by using other terms and phrases to describe themselves.

Therefore, the flyer went on, be on the watch for people calling themselves “anarcho-capitalists” or using the phrase “Make America Great Again.”

So if you’re an anarcho-capitalist — which means you absolutely oppose the initiation of violence — you are actually a neo-Nazi.

(Because we all know how philosophically opposed to violence the Nazis were.)

This particular inanity is brought to you by the folks who are convinced they are living in a white supremacist society. (The term “white supremacy” sure underwent a massive redefinition in 2016, didn’t it?)

But if this is really a “white supremacist” society, why would white supremacists have to conceal their identities by calling themselves something else?

Duh.

Why would being a genuine white supremacist be career suicide?

Why would so many millions of nonwhites be clamoring to enter a society allegedly based on racial apartheid?

The other day, Tucker Carlson interviewed a professor from the University of Connecticut who pushes the America-is-a-white-supremacist-society theme. He asked how that can be reconciled with the massive demographic change since 1965: with only 12% of 60 million new immigrants being from Europe, are we really witnessing a white supremacist system in action?

Yes, we are, the professor replied, nonsensically.

By that reasoning, reversing this nonwhite immigration would harm the cause of white supremacy, and that would make Donald Trump a major foe of white supremacy.

The professor didn’t follow his reasoning down that road.

Unfortunately, these people are so irrational and bizarre that I can’t parody them. So they’re taking the one fun, redeeming quality the left once had — susceptibility to satire — and ruining it for me.

The whole thing reminds me of one of the great characters in all of literature: Wonko the Sane, from Douglas Adams’ Hitchhiker’s Guide to the Galaxy series. Wonko feared for the world’s sanity, so he referred to the entire world (except his residence) as Inside the Asylum and his own residence as Outside the Asylum.

If you’d like to step Outside the Asylum for a bit, join me as a supporting listener of the Tom Woods Show and among all the other goodies, we’ll welcome you into the Tom Woods Show Elite — my private group that’s as far Outside the Asylum as you can get.

The way forward:

http://www.SupportingListeners.com

Download Audio
30 Jan 03:20

Exposing The First Birther

by tonyheller

Vicious rumors about Barack Obama’s birthplace were started by racists wanting to impugn Mr. Obama’s integrity. The first one of these evil people was this man in 1991, who curiously has the same name and face as Barack Obama. I have nothing good to say about the people behind the vicious “Born in Kenya” rumors.

‘Born in Kenya and raised in Indonesia and Hawaii’

Seven years later, Mr. Barack Obama continued his efforts to undermine Mr. Barack Obama.

Client List

Kenya’s largest newspaper joined in these efforts to smear our first congenitally lying future president.

Kenyan-born Obama all set for US Senate

This evil birther, who shared Barack Obama’s name and face, continued to spread the birther lie right up until three weeks before he announced his candidacy for president.

Dystel & Goderich Literary Management :: Client List

But just in the nick of time he stopped spreading these rumors which would have kept our first communist president out of office.

Dystel & Goderich Literary Management :: Client List

Obama declares he’s running for president – CNN.com

Sadly though, African newspapers continued to try to smear our first anti-American president right up until election day.

Nigerian Observer Online Edition

As Gavin Schmidt would say, if you don’t like the facts in Africa – simply change them.

30 Jan 03:06

10 apps everyone should have on their computer

by Avery Hartmans


Mobile apps are far more popular than their desktop counterparts, but most people still rely on laptop or desktop computers, either for work or just browsing the web at home.

Whether you've invested in Apple's Mac line or a Windows PC, there are some worthwhile desktop apps out there to help you get more out of your computer.

Here are the apps, for both Windows and Mac, that you should download.


For Mac



Skitch lets you quickly and easily annotate images

Skitch is a Mac app that lets you draw on and annotate images. Screenshot an image or upload a photo from your computer, add arrows, text, or symbols, then export and share. 

Skitch is free in the Mac App Store.



Giphy Capture helps you turn any video into a GIF

Creating GIFs used to be a multi-step process or, at the very least, required some very suspect-looking apps. Giphy Capture not only makes that process quicker, but it also has an intuitive interface that anyone could figure out: You can capture, edit, and upload GIFs with just a few clicks. 

Giphy Capture is free in the Mac App Store.
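For readers curious what the old “multi-step process” looked like, here is a minimal Python sketch of the command-line route an app like Giphy Capture replaces. It assumes ffmpeg is installed and on your PATH, and the file names and settings are placeholders.

    # Convert a short video clip to a GIF with ffmpeg (the pre-app way).
    import subprocess

    def video_to_gif(src="clip.mp4", dst="clip.gif", fps=12, width=480):
        # Lower the frame rate and resize, then let ffmpeg write out a GIF.
        subprocess.run(
            ["ffmpeg", "-i", src,
             "-vf", f"fps={fps},scale={width}:-1",
             "-y", dst],
            check=True,
        )

    video_to_gif()

Even this one-function version means picking a frame rate and output size by hand, which is exactly the kind of fiddling a point-and-click app hides behind a few clicks.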



See the rest of the story at Business Insider
29 Jan 23:49

Things Every Hacker Once Knew

by Eric Raymond

As promised in the comments on my last post, here it is:

Things Every Hacker Once Knew

Comments and corrections welcome.

29 Jan 23:44

The Video I Uploaded to My Old 406 Subscriber YouTube Channel

I used to post a lot of gaming videos on this account, but I stopped using it when I joined Steemit. Today I decided to make a video both explaining why I haven't posted and advertising Steemit. Here is the video:

https://www.youtube.com/watch?v=hown-iTsLF4


Thanks for reading this! I hope a few of my subscribers decide to join because of this video. See you later!

Also, remember to check for my weekly 7 post, [Something else will eventually go into this space]!