Reuters is reporting on supposed ‘anarchist groups’ planning to interfere with Donald Trump’s inauguration ceremony on January 20th. This falls in line with the popularly-accepted (and completely wrong) definition of anarchism as orderless, violent chaos. To the typical westerner, an ‘anarchist’ is the kind of person who incites riots, breaks windows, attacks police, and is an all-around violent troublemaker who doesn’t respect any rules, boundaries, or private property. But such people are not anarchists at all; they are anti-capitalist left-wing radicals who just call themselves anarchists.
Anarcho-capitalism is sometimes labeled as ‘right-wing anarchy,’ but this is a mischaracterization. Someone may have socially-libertine cultural views, or they may have socially-conservative/traditional cultural views, but so long as they don’t seek to use force to make other people behave in their preferred fashion, their political philosophy can’t be labeled either left-wing or right-wing. There is really no such thing as left-wing anarchy or right-wing anarchy. Rioters, violent protesters and other such rabble-rousers are phony anarchists.
Anarchy literally means ‘without a ruler,’ as in the absence of the state. It doesn’t mean violence, chaos, disorder, or a lack of standards. A principled anarchist doesn’t use aggression to get what he wants, because if he does so he is setting himself up as the same kind of ruler he opposes. An anarchist respects liberty, order, property, and the rights of free enterprise. The point is precisely that human society and the free market create order without central planning. State intervention is intrinsically a distortion of this order, and is therefore both anti-capitalist and anti-order.
People who go around destroying private property, inflicting violence, and interfering in other people’s peaceful activity are the exact opposite of anarchists; they are would-be politicians. Statist politicians (and the voters who contract them) are those who assume for themselves the right to force other people to conform to their will. By definition, anarchy entails the absence of such people in positions of arbitrary power. Anarchy entails a societal structure where individuals, groups, families, churches, businesses, trade organizations, social clubs and others are able to go about their lives in whatever way they see fit, so long as they don’t initiate aggression against anyone else. And for those who do initiate aggression, solutions for dealing with that behavior will be devised by entrepreneurs (as well as by voluntary social groups and associations, like churches).
A libertarian Christian concept of anarchy (and specifically anarcho-capitalism) takes these facts a step further by placing them within the context of the ethical-theological: man actually does have one rightful ruler (God), and the way in which God exercises his rule in the world is through delegated natural social and market institutions: family, church, business, etc. The state, which is neither a natural social institution nor a market creation, is a sinful aberration which distorts God’s blueprint for the world by usurping his role and forcefully manipulating society and market. Anarcho-capitalism is part of God’s blueprint for an orderly and prosperous society absent this sinful aberration.
When violent rioters stir up chaos, they are acting as polyarchists. They want to make up their own rules and push them on others. They want to force other people to bend to their will. As such, they are simply wannabe politicians. They are rebels without applause.
FOR ADDITIONAL READING:
On how the state distorts order and weakens society’s natural institutions such as family, church and businesses, see Robert Nisbet’s The Quest for Community (1953). An alternate name for the same book is Community and Power.
On how the state is the principal engine of de-civilization, and how society naturally orders itself in the state’s absence, see Hans Hoppe’s Democracy: The God That Failed (2001).
On a sweeping general analysis of how a stateless society is fully-capitalist and embraces order rather than eschews it, see Murray Rothbard’s For a New Liberty (1973).
According to President Trump’s White House website:
For too long, Americans have been forced to accept trade deals that put the interests of insiders and the Washington elite over the hard-working men and women of this country. As a result, blue-collar towns and cities have watched their factories close and good-paying jobs move overseas, while Americans face a mounting trade deficit and a devastated manufacturing base.
By fighting for fair but tough trade deals, we can bring jobs back to America’s shores, increase wages, and support U.S. manufacturing.
Here’s a reality check (and fact check) for Team Trump, based on the conclusions summarized below from the research article “The Myth and Reality of Manufacturing in America” by Michael J. Hicks and Srikant Devaraj at Ball State University (emphasis added):
Manufacturing has continued to grow, and the sector itself remains a large, important, and growing sector of the U.S. economy. Employment in manufacturing has stagnated for some time, primarily due to growth in productivity of manufacturing production processes. Three factors have contributed to changes in manufacturing employment in recent years: a) Productivity, b) Trade, and c) Domestic demand. Overwhelmingly, the largest impact is productivity. Almost 88% of job losses in manufacturing in recent years can be attributable to productivity growth (see chart above), and the long-term changes to manufacturing employment are mostly linked to the productivity of American factories. Growing demand for manufacturing goods in the U.S. has offset some of those job losses, but the effect is modest, accounting for a 1.2% increase in jobs beyond what we would expect if consumer demand for domestically manufactured goods was flat (see chart).
Exports lead to higher levels of domestic production and employment, while imports reduce domestic production and employment. The difference between these, or net exports, has been negative since 1980, and has contributed to roughly 13.4% of job losses in the U.S. in the last decade (see chart). Our estimate is almost exactly that reported by the more respected research centers in the nation. Manufacturing production remains robust. Productivity growth is the largest contributor to job displacement over the past several decades.
Bottom Line: The great majority of US manufacturing job losses (88% according to this study) are a direct result of increased productivity of America’s factory workers, not because our jobs have moved overseas. And those factory jobs lost to increased productivity are never, ever coming back to America, unless of course Team Trump can somehow negotiate deals with the powerful forces of technological progress to slow the steady march of advances in labor-saving manufacturing technologies and “Make America Less Productive Again.”
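The attribution arithmetic behind these percentages can be sketched in a few lines. The shares below are the ones reported by the Hicks/Devaraj study; the absolute job-loss figure is hypothetical, for illustration only.

```python
# Illustrative sketch of the job-loss attribution arithmetic.
# Shares are from the Hicks/Devaraj study; the total job-loss
# figure is hypothetical, for illustration only.

def attribute_job_change(total_loss, shares):
    """Split a total employment decline into attributed components.

    A negative share represents an offsetting factor (e.g. growing
    domestic demand partially offsetting job losses)."""
    return {factor: total_loss * share for factor, share in shares.items()}

components = attribute_job_change(
    total_loss=5_000_000,  # hypothetical total decline
    shares={
        "productivity": 0.88,   # study: ~88% due to productivity growth
        "trade": 0.134,         # study: ~13.4% due to negative net exports
        "demand": -0.012,       # study: demand growth offset ~1.2%
    },
)
for factor, jobs in components.items():
    print(f"{factor}: {jobs:+,.0f}")
```

Note that the productivity and trade shares sum to slightly more than 100%; the small demand offset brings the decomposition back into balance, which matches the study's framing.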
If this happens, the line on the above charts is going even lower, especially as compared on a historical basis:
Donald Trump is ready to take an ax to government spending. Staffers for the Trump transition team have been meeting with career staff at the White House ahead of Friday’s presidential inauguration to outline their plans for shrinking the federal bureaucracy, The Hill has learned.
The changes they propose are dramatic.
The departments of Commerce and Energy would see major reductions in funding, with programs under their jurisdiction either being eliminated or transferred to other agencies. The departments of Transportation, Justice and State would see significant cuts and program eliminations. The Corporation for Public Broadcasting would be privatized, while the National Endowment for the Arts and National Endowment for the Humanities would be eliminated entirely. Overall, the blueprint being used by Trump’s team would reduce federal spending by $10.5 trillion over 10 years.
Dramatic indeed. And if this happens, you wouldn’t even need dynamic scoring to keep the budget from bleeding red ink from deep tax cuts. But whether it should happen is another question, especially compared with reforming middle-class entitlements.
The post Just how much budget cutting are Republicans going to do? Maybe quite a bit. appeared first on AEI.
‘You should, in science, believe logic and arguments, carefully drawn, and not authorities.’
– Richard Feynman
NOAA thermometers show little warming in the US since 1895, and the hottest decade was the 1930s. But NOAA doesn’t report actual measured temperatures to the public; rather, they report temperatures which have been tampered with by a small handful of employees, as seen in the red line below.
The hockey stick of data tampering is seen below. NOAA creates 1.5F warming by simply altering the data.
Most of this hockey stick has occurred since 1970, and is due to (quite literally) fake data. NOAA is losing lots of monthly station data. If a station doesn’t report in a particular month, NOAA simply makes the data up for that station. In the 1970s about 10% of the data was fake, but now more than 40% of NOAA temperatures are fake.
Since the cold 1970s, thermometers show the US warming about 1.2F. But NOAA’s fake data has warmed nearly three degrees. And fake data now comprises nearly half of the US temperature record.
The fake data completely corrupts the US temperature record.
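The kind of infilling described above, estimating a missing station-month from surrounding data, can be sketched naively as a neighbor average. This is an illustrative simplification, not NOAA's actual procedure, and the station readings are made up:

```python
# Naive sketch of monthly-station infilling: if a station's value is
# missing, estimate it as the mean of the stations that did report
# that month. Illustrative only -- NOT NOAA's actual procedure.

def infill_month(readings):
    """readings: dict of station -> temperature (F), or None if not reported.

    Returns (filled readings, fraction of values that were estimated)."""
    reported = [t for t in readings.values() if t is not None]
    mean = sum(reported) / len(reported)
    filled = {s: (t if t is not None else mean) for s, t in readings.items()}
    frac = sum(t is None for t in readings.values()) / len(readings)
    return filled, frac

# Hypothetical month: stations B and E failed to report.
month = {"A": 54.2, "B": None, "C": 55.0, "D": 53.8, "E": None}
filled, frac = infill_month(month)
print(f"{frac:.0%} of values estimated")  # 2 of 5 stations -> 40%
```

The `frac` figure is the analogue of the "more than 40% of NOAA temperatures are fake" claim: the share of reported values that are estimates rather than measurements.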
But the story gets even uglier. If we remove the fake data and only look at the adjustments to actual thermometer data, even more nefarious activity is exposed. NOAA massively cooled the hot years of the 1920’s through the 1950’s to make the heat disappear, with the maximum data tampering occurring around 1940.
Government climate scientists plotted to remove the 1940’s warmth, and discussed it in E-mails.
Their data tampering allows NOAA to create red hot graphs like this one, in an effort to make people believe the US is heating out of control.
But 2016 wasn’t a hot year in the US. The area of the US affected by very hot weather was near a record low.
Eighty years ago, 105F temperatures were widespread throughout the US, except for New England and Florida.
But now, only a tiny percentage of the US reaches 105F.
NOAA, like almost every other government agency, has been corrupted by the Obama White House. In seven days we take our country back.
Hello everyone, this is the last part of my Art of War review. Writing this series has been extremely fun and enjoyable, and I can't believe it's almost over. Anyway, let's get into the review. (Please note that this was one of the toughest chapters to summarize, as most of the topics are hard to connect with real life because they are mostly used in war).
[Image Source: pixabay.com, License: CC0 Public Domain]
Art of War original text
The Use of Spies
1. Sun Tzu said: Raising a host of a hundred thousand men and marching them great distances entails a heavy loss on the people and a drain on the resources of the State. The daily expenditure will amount to a thousand ounces of silver. There will be commotion at home and abroad, and men will drop down exhausted on the highways. As many as seven hundred thousand families will be impeded in their labor.
Launching a large campaign is hard on both you and your supporters. It is expensive and exhausting. (Real world example: For one company to buy another puts a lot of strain on the employees, the executives, and the shareholders).
2. Hostile armies may face each other for years, striving for the victory which is decided in a single day. This being so, to remain in ignorance of the enemy's condition simply because one grudges the outlay of a hundred ounces of silver in honors and emoluments, is the height of inhumanity.
Anyone who wants to be successful in a field should understand all aspects of it, even the aspects they don't appreciate. This way, they can feel sympathy for those who work in those fields, and be prepared for anything. (Real world example: A musician who wants to be successful should learn aspects of the business which are not related to music whatsoever [intellectual property, marketing, editing, etc.]).
3. One who acts thus is no leader of men, no present help to his sovereign, no master of victory.
If you're not preparing yourself, you will experience failure and the disappointment of your supporters. (Real world example: The cousin in this story).
4. Thus, what enables the wise sovereign and the good general to strike and conquer, and achieve things beyond the reach of ordinary men, is foreknowledge.
Therefore, success is caused by preparation.
5. Now this foreknowledge cannot be elicited from spirits; it cannot be obtained inductively from experience, nor by any deductive calculation.
6. Knowledge of the enemy's dispositions can only be obtained from other men.
To be successful, you have to study the circumstances of your situation. You cannot improvise based on past experience, or mathematical models. (Real world example: Just because you have an A in your history class doesn't mean you shouldn't study).
7. Hence the use of spies, of whom there are five classes: (1) local spies; (2) inward spies; (3) converted spies; (4) doomed spies; (5) surviving spies.
This is why you gather information, of which there are five methods of gathering: (1) Gathering information about your surroundings; (2) Getting information about your competitor; (3) Gaining access to your competitor's plans and information; (4) Releasing information in order to mislead your competitor; (5) Working with someone who used to work for your competitor.
8. When these five kinds of spy are all at work, none can discover the secret system. This is called "divine manipulation of the threads." It is the sovereign's most precious faculty.
When these kinds of programs are in play, you will gain more information than your competitor. (Real world example: In football, (1) using the defense's lineup to decide what play to run; (2) linemen on the offense yelling "gap" to signal that there is a gap the running back could use, or yelling "blitz" if a linebacker is blitzing towards a certain area; (3) using a syncopated snap count to get the defense to jump offsides; (4) a lineman yelling a play that is not really being run to trick the defense into thinking that play is being used; (5) a football player changing schools and telling his new team the old team's playbook).
9. Having local spies means employing the services of the inhabitants of a district.
Gathering information about your surroundings. (Real world example: See number 8).
10. Having inward spies, making use of officials of the enemy.
Getting information about your competitor. (Real world example: See number 8).
11. Having converted spies, getting hold of the enemy's spies and using them for our own purposes.
Gaining access to your competitor's plans and information. (Real world example: See number 8).
12. Having doomed spies, doing certain things openly for purposes of deception, and allowing our spies to know of them and report them to the enemy.
Releasing information in order to mislead your competitor. (Real world example: See number 8).
13. Surviving spies, finally, are those who bring back news from the enemy's camp.
Working with someone who used to work for your competitor. (Real world example: See number 8).
14. Hence it is that with none in the whole army are more intimate relations to be maintained than with spies. None should be more liberally rewarded. In no other business should greater secrecy be preserved.
The people needed for information gathering should be highly honored and well rewarded. (Real world example: The winning quarterback is often given the Super Bowl MVP award, because he has to decide which play to use on every single down, learn to read the defense, and trust his team).
15. Spies cannot be usefully employed without a certain intuitive sagacity.
16. They cannot be properly managed without benevolence and straightforwardness.
To gather information you need to have sound judgement, good intentions, and straightforwardness. (Real world example: Joe Kenda, when he was a detective, always used sound judgement and benevolence when he gathered information before arresting suspects. He was fair, and waited until enough evidence was gathered to prove a suspect's involvement in a crime).
17. Without subtle ingenuity of mind, one cannot make certain the truth of their reports.
Without being able to decide things quietly, you cannot decipher the truth. (Real world example: A jury needs to think quietly when deliberating a court case).
18. Be subtle! Be subtle! and use your spies for every kind of business.
Quietly gather information in every situation you are in. (Real world example: A cop must do this, to make sure that the suspect they arrest is actually guilty).
19. If a secret piece of news is divulged by a spy before the time is ripe, he must be put to death together with the man whom the secret was told.
Don't use information that does not apply to your situation. (Real world example: If in math class you know George Washington was the first president, but you don't know what 2+2 equals, you don't write "George Washington was the first president" as the answer).
20. Whether the object be to crush an army, to storm a city, or to assassinate an individual, it is always necessary to begin by finding out the names of the attendants, the aides-de-camp, and door-keepers and sentries of the general in command. Our spies must be commissioned to ascertain these.
Figure out who is making the decisions for your competitors before you worry about what the decisions are. (Real world example: If you are a UFC fighter and you know who your opponent's trainer is, you can predict what general moves and style they will use).
21. The enemy's spies who have come to spy on us must be sought out, tempted with bribes, led away and comfortably housed. Thus they will become converted spies and available for our service.
You must seek people who are gathering information about you, and try to convince them to gather information about your competitor for you.
22. It is through the information brought by the converted spy that we are able to acquire and employ local and inward spies.
Using information about your enemy that you gathered by gaining access to their plans, you can figure out the situation, and learn more about your competitor through your surroundings.
23. It is owing to his information, again, that we can cause the doomed spy to carry false tidings to the enemy.
You conform your false information to be consistent with what you know about your competitor. (Real world example: If the defense in football lines up like they are expecting a pass, the offensive lineman can shout a passing play "by accident.")
24. Lastly, it is by his information that the surviving spy can be used on appointed occasions.
You can then use the information given to the enemy to formulate a plan based on the information insiders have given you.
25. The end and aim of spying in all its five varieties is knowledge of the enemy; and this knowledge can only be derived, in the first instance, from the converted spy. Hence it is essential that the converted spy be treated with the utmost liberality.
The goal is to gather information in all of these circumstances, so treat those who accomplish this goal with respect.
26. Of old, the rise of the Yin dynasty was due to I Chih who had served under the Hsia. Likewise, the rise of the Chou dynasty was due to Lu Ya who had served under the Yin.
Yep what he said ^
27. Hence it is only the enlightened ruler and the wise general who will use the highest intelligence of the army for purposes of spying, and thereby they achieve great results. Spies are a most important element in war, because on them depends an army's ability to move.
Use those whom you trust to gather information and mislead your competitors. This will advance you closer towards your goals.
Thanks for reading this! Tomorrow I will be uploading every single part combined into one article so you don't have to click different links to read through the whole series. Please let me know if you enjoyed this series, and let me know if the idea I stated in yesterday's conclusion is good for my next (summer) series. As always, feedback is appreciated, and remember to check back later!
Also remember to check for: My weekly 7 post, and My Art of War Review
There are seven major multi-trillion dollar markets which Bitcoin can compete with in the future.
by Judith Curry
Some reflections on my transition from academic climate research to private sector weather forecasting and regional climate change assessments.
I’ve received thousands of emails, tweets, etc. regarding my retirement from Georgia Tech. Two messages motivated the theme for this post:
Congratulations on your emancipation, and I have no doubt that you will enjoy and thrive in a skin-in-the-game world!
Good luck with your new venture, in the real world where what we pay for must work as advertised. Change from alternative reality!
Skin in the game
So, what is ‘skin in the game’?
Definition #1: If someone is said to have some “skin in the game”, it usually means that person has something to lose (whether it’s some form of ownership, money, property, or just respect) in a given situation – and that situation is usually something business related. So, the “game” in the phrase just refers to a situation where there is some risk involved, where people can potentially lose something.
Definition #2: To be at risk financially because you have invested in something that you want to happen. You take more ownership of something when you have some skin in the game.
Nassim Taleb has a new book in the works: Skin in the game: the thrills and logic of risk taking. Based on the snippets that I’ve seen, I don’t quite ‘get it’, but the title of one of his chapters is The skin of others in your game, which is a theme I want to exploit here.
Weather forecasts versus climate predictions
If you are a weather forecaster in the private sector, you will quickly lose your clients if your forecasts are consistently wrong. Daily forecasts are evaluated daily; seasonal forecasts are evaluated several times each year. Clearly weather forecasters have skin in the game in terms of their forecasts.
With regards to climate projections, the predictions being made now will be irrelevant in 2100, which is their target prediction date. In fact, the forecasts become obsolete every 5 years or so, as new model versions are implemented. Recent attempts to evaluate climate model projections in CMIP5 during the early 21st century have shown striking discrepancies between model projections and observations.
Defenders of the climate models and climate model projections argue that climate models shouldn’t be expected to verify on decadal time scales.
In other words, climate modelers have no skin in the game in terms of losing something if their forecasts turn out to be wrong. In fact, there is actually a perversion of skin in the game, whereby scientists are rewarded (professional recognition, grants, etc.) if they make alarming predictions (even if they are easily shown not to comport with observations).
Skin in the game: forecast uncertainty and confidence
I would like to start this section with an excerpt from my 2013 House testimony:
By engaging with decision makers in both the private and public sector on issues related to weather and seasonal climate variability through my company CFAN, my perspective on uncertainty and confidence in context of prediction, and how to convey this, has utterly and irreversibly changed. I have learned about the complexity of different decisions that depend, at least in part, on weather and climate information. I have learned the importance of careful determination and conveyance of the uncertainty associated with a forecast, and the added challenges associated with predicting extreme events. Confidence in a particular probabilistic forecast is determined by consistency of consecutive forecasts, and historical evaluation of forecast accuracy and errors under similar conditions. I have also learned how different types of decision makers make use of forecast uncertainty and confidence information.
I have found that the worst forecast outcome is a forecast issued with a high level of confidence that turns out to be wrong; a close second is missing the possibility of an extreme event.
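The daily evaluation of probabilistic forecasts described above can be made concrete with a standard verification metric such as the Brier score; the forecast and outcome numbers here are hypothetical:

```python
# One standard way to "evaluate forecasts daily": the Brier score
# for probabilistic forecasts of a binary event (lower is better;
# 0 is a perfect forecast). Numbers below are hypothetical.

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical week of rain forecasts (probability) vs. what happened (1 = rain).
forecast = [0.9, 0.2, 0.7, 0.1, 0.8, 0.5, 0.3]
observed = [1,   0,   1,   0,   0,   1,   0]
print(f"Brier score: {brier_score(forecast, observed):.3f}")  # prints 0.161
```

A forecaster with skin in the game is judged on a running score like this one; note that the worst single contribution above comes from the confident 0.8 forecast of rain on a day it stayed dry, which matches the point about high-confidence misses being the worst outcome.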
In other words, my work with CFAN epitomizes a ‘skin in the game’ environment, which is diametrically opposed to the ‘skin of others in your game’ approach taken by academics to climate change.
Working in an environment with skin in the game dramatically changed my perspective on uncertainty and confidence. This perspective (derived from my skin in the game work with CFAN) is very clear in my discussion of climate science, uncertainty, and overconfidence by the IPCC [https://judithcurry.com/category/uncertainty/].
This perspective may be unique in the academic climate community; I don’t know of any other climate scientists with skin in the game in private sector weather forecasting.
Climate prognostications: the skin of others in your game
When I saw the phrase ‘skin of others in your game,’ it immediately struck me as describing the situation of climate scientists/activists, whereby they make dramatic prognostications of dire consequences and advocate for emissions reductions with their attendant economic costs and detrimental effects on developing countries.
The scientists themselves have absolutely no skin in the game, other than the perversions associated with being professionally rewarded for making alarming predictions and claiming that anyone who doesn’t agree with them is a science ‘denier.’
Who among the climate activists would redirect their research funding and travel funding to efforts to reduce emissions? Who among the climate scientists/activists are making a serious effort to reduce their carbon footprint, e.g. air travel? [see Walking the climate talk]. They have no skin at all in the game. This becomes truly perverse when they insist on the skin of others in their game.
I said in my post JC in transition that I thought that the private sector is a more ‘honest’ place for a scientist than academia. In this context, in the private sector you have skin in the game with regards to weather forecasts (and shorter term climate forecasts), whereas in academia scientists have no skin in the game in terms of the climate change projections.
Making shorter term weather or climate forecasts, with some skin in the game, would be very good experience for academic climate scientists. And this experience just might end up changing their perspectives on uncertainty and forecast confidence... to something that sounds like my perspective.
After a years-long dispute over a 2012 paper which suggested there might be some effects of first-person shooter video games on players, the journal has retracted the paper. The stated reason in the notice: Some outside researchers spotted irregularities in the data, and contacted the corresponding author’s institution, Ohio State University, in 2015. Since the original […]
The post “Boom, headshot!” Disputed video game paper retracted appeared first on Retraction Watch.
Last March, a PhD student at Harvard filed a misconduct allegation against his mentor, a prominent stem cell researcher. Three months later, he was taken from his home by police in the middle of the night for a forced psychiatric evaluation. How did this happen? As with any story, there are at least two sides. […]
The post A Harvard whistleblower was subjected to a forced mental exam. Here’s why. appeared first on Retraction Watch.
I always look forward to the edge question of the year. I've only found time to read a few of them so far.
Richard Dawkins' “meme” became a meme, known far beyond the scientific conversation in which it was coined. It’s one of a handful of scientific ideas that have entered the general culture, helping to clarify and inspire.
The Edge 20th Anniversary Annual Question
"WHAT SCIENTIFIC TERM OR CONCEPT OUGHT TO BE MORE WIDELY KNOWN?"
Of course, not everyone likes the idea of spreading scientific understanding. Remember what the Bishop of Birmingham’s wife is reputed to have said about Darwin’s claim that human beings are descended from monkeys: "My dear, let us hope it is not true, but, if it is true, let us hope it will not become generally known."
Of all the scientific terms or concepts that ought to be more widely known to help to clarify and inspire science-minded thinking in the general culture, none are more important than “science” itself.
Many people, even many scientists, have traditionally had a narrow view of science as controlled, replicated experiments performed in the laboratory—and as consisting quintessentially of physics, chemistry, and molecular biology. The essence of science is conveyed by its Latin etymology: scientia, meaning knowledge. The scientific method is simply that body of practices best suited for obtaining reliable knowledge. The practices vary among fields: the controlled laboratory experiment is possible in molecular biology, physics, and chemistry, but it is either impossible, immoral, or illegal in many other fields customarily considered sciences, including all of the historical sciences: astronomy, epidemiology, evolutionary biology, most of the earth sciences, and paleontology. If the scientific method can be defined as those practices best suited for obtaining knowledge in a particular field, then science itself is simply the body of knowledge obtained by those practices.
Science—that is, reliable methods for obtaining knowledge—is an essential part of psychology and the social sciences, especially economics, geography, history, and political science. Not just the broad observation-based and statistical methods of the historical sciences but also detailed techniques of the conventional sciences (such as genetics and molecular biology and animal behavior) are proving essential for tackling problems in the social sciences. Science is nothing more nor less than the most reliable way of gaining knowledge about anything, whether it be the human spirit, the role of great figures in history, or the structure of DNA.
It is in this spirit of scientia that Edge, on the occasion of its 20th anniversary, is pleased to present the Edge Annual Question 2017. Happy New Year!
—John Brockman, Editor, January 1, 2017
[206 contributors; 143,000 words:] Scott Aaronson, Anthony Aguirre, Adam Alter, Ross Anderson, Samuel Arbesman, Simon Baron-Cohen, Lisa Feldman Barrett, Thomas Bass, Nicolas Baumard, Gregory Benford, Jeremy Bernstein, Laura Betzig, Susan Blackmore, Giulio Boccaletti, Ian Bogost, Joshua Bongard, Raphael Bousso, Stewart Brand, David M. Buss, Jimena Canales, Nicholas Carr, Sean Carroll, Leo Chalupa, Ashvin Chhabra, Jaeweon Cho, Nicholas A. Christakis, Brian Christian, David Christian, George Church, Andy Clark, Gregory Cochran, Jerry A. Coyne, Helena Cronin, David Dalrymple, Richard Dawkins, Aubrey de Grey, Luca De Biase, Sarah Demers, Daniel C. Dennett, Emanuel Derman, David DeSteno, Diana Deutsch, Keith Devlin, Jared Diamond, Rolf Dobelli, Scott Draves, George Dyson, Nick Enfield, Brian Eno, Juan Enriquez, Nancy Etcoff, Dylan Evans, Daniel Everett, Christine Finn, Stuart Firestein, Helen Fisher, Tecumseh Fitch, Jessica Flack, Steve Fuller, Howard Gardner, Michael Gazzaniga, James Geary, Amanda Gefter, Neil Gershenfeld, Gerd Gigerenzer, Bruno Giussani, Nigel Goldenfeld, Dan Goleman, Beatrice Golomb, Alison Gopnik, Kurt Gray, Tom Griffiths, June Gruber, Hans Halvorson, Sam Harris, Cesar Hidalgo, Roger Highfield, W. Daniel Hillis, Michael Hochberg, Donald Hoffman, Jim Holt, Bruce Hood, Daniel Hook, John Horgan, Sabine Hossenfelder, Nicholas Humphrey, Joichi Ito, Nina Jablonski, Jennifer Jacquet, Matthew O. Jackson, Kate Jeffery, Koo Jeong A, Gordon Kane, Stuart Kauffman, Kevin Kelly, Katherine Kinzler, Gary Klein, Jon Kleinberg, Brian Knutson, Bart Kosko, Stephen Kosslyn, Kai Krause, Lawrence Krauss, Coco Krumme, Robert Kurzban, Peter Lee, Cristine Legare, Martin Lercher, Margaret Levi, Janna Levin, Daniel Lieberman, Matthew Lieberman, Andre Linde, Antony Garrett Lisi, Mario Livio, Seth Lloyd, Tania Lombrozo, Jonathan B. Losos, Ziyad Marar, John Markoff, Chiara Marletto, Barnaby Marsh, Abigail Marsh, Ursula Martin, John C. 
Mather, Ian McEwan, Hugo Mercier, Yuri Milner, Read Montague, Richard Muller, Priyamvada Natarajan, John Naughton, Rebecca Newberger Goldstein, Richard Nisbett, Tor Nørretranders, Michael Norton, Peter Norvig, Hans Ulrich Obrist, James J. O'Donnell, Steve Omohundro, Bruce Parker, Irene Pepperberg, Clifford Pickover, Steven Pinker, David Pizarro, Robert Plomin, Ernst Pöppel, William Poundstone, Robert Provine, Richard Prum, Matthew Putman, Steven Quartz, David Queller, Sheizaf Rafaeli, Lisa Randall, Abbas Raza, Azra Raza, Martin Rees, Diana Reiss, Siobhan Roberts, Daniel Rockmore, Andrés Roemer, Phil Rosenzweig, Carlo Rovelli, David Rowan, Douglas Rushkoff, Paul Saffo, Eduardo Salcedo-Albarán, Buddhini Samarasinghe, Robert Sapolsky, Roger Schank, Maximilian Schich, Laurence C. Smith, Simone Schnall, Bruce Schneier, Oliver Scott Curry, Gino Segre, Charles Seife, Terrence J. Sejnowski, Eldar Shafir, Michael Shermer, Seth Shostak, Gerald Smallberg, Lee Smolin, Dan Sperber, Paul Steinhardt, Victoria Stodden, Rory Sutherland, Melanie Swan, Tim Taylor, Max Tegmark, Richard Thaler, Frank Tipler, John Tooby, Eric Topol, Barbara Tversky, Athena Vouloumanos, Adam Waytz, Eric Weinstein, Linda Wilbrecht, Frank Wilczek, Jason Wilkes, Elizabeth Wrigley-Field, Victoria Wyatt, Itai Yanai, Dustin Yellin
Pre-order. Release date: February 7, 2017
140,000 words • 584 pages
Dr. Roy Spencer forecasts that RSS TLT data will be altered:
“I expect there will soon be a revised TLT product from RSS which shows enhanced warming, too.
Here’s what I’m predicting:
1) neither John Christy nor I will be asked to review the paper
2) it will quickly sail through peer review (our UAH V6 paper is still not in print nearly 1 year after submission)
3) it will have many authors, including climate model people and the usual model pundits (e.g. Santer), which will supposedly lend legitimacy to the new data adjustments.
Let’s see how many of my 3 predictions come true.”
Prosecutor: Now, when we think of a mile, we don’t think that that’s very far. But this is someone that didn’t have a car. This was an eight-year-old small child who was told that he had to walk all the way home in the winter at night.
Officer Doty: At night, anywhere, in my opinion, is just dangerous for a child, for even an adult to walk home that late. Even 8:00 o’clock doesn’t seem that late, but it just is. Anything can happen at any time. You can get hit by a car, or somebody can walk up behind you, you know, and steal your purse or try and kidnap you. Things like that. It does happen.
There are always infinitely small risks out there day or night…. My job is not to put them in a bubble and shield them from all possible hypothetical dangers 24/7.
Mike Tang: You’re saying this dangerous intersection has no crosswalks whatsoever?
Officer Doty: No, it does.
Mike Tang: It does have crosswalks?
Officer Doty: Yes. What we said was, if he continued down east, there’s no crosswalk farther down.
Mike Tang: But you also said he knew the way home, correct?
Officer Doty: Yes.
Mike Tang: So if he knew the way home, he would not walk in an improper direction; would you agree with that?
Prosecutor: Objection, speculation.
Judge Randall Taylor: Yeah, that would be speculation, Mr. Tang.
Officer Doty: In my opinion, if I was in your shoes, I wouldn’t have left my child there. I have a 20-year-old daughter that I would not let her walk home.
There are millions of truck drivers in the US today. In particular, there are about 1.7 million tractor-trailer (human) drivers. There are many more professional truck drivers in the US, but tractor-trailer drivers are the “archetypal truck drivers”.
In 2016, we had several demonstrations of self-driving tractor-trailer trucks. A self-driving tractor-trailer truck delivered beer for Uber in October.
Putting two and two together, many people are worried that we are soon going to face a wave of technology-related unemployment. You can’t, realistically, retrain over a million tractor-trailer drivers into Facebook engineers.
However, when jobs disappear slowly, it does not always translate into unemployment. For example, people naturally leave an industry even while they still have jobs. This occurs through retirements and deaths, but also in various other ways. Maybe someone gets tired of the long hours involved in driving trucks and takes an office job. Some people decide to start a small business. And so forth. Moreover, when job prospects are poor, fewer people enter an industry. There may be no need to fire anyone.
This kind of attrition is likely happening to journalism today. In the past, every town had its independent newspaper. Newspapers were our primary source of information about the world. Today, fewer and fewer of us read newspapers, and those who do tend to be older. The bulk of journalism jobs were never at the New York Times; they were always at a myriad of small papers. These small papers are mostly obsolete. Many prospered through classified ads, but newspaper classified ads are a relic of the past. The few small newspapers that remain can be run with a much smaller crew. So how did the workforce fare? We went from half a million journalists (in 1990) to far fewer than half that (180,000) in 2016. Of course, over the same period of time, about 200,000 new jobs were created in Internet publishing, a closely related industry. So while there are fewer journalists today, it took 26 years to see the disappearance of roughly 2/3 of all jobs… and the Internet created many related new jobs. All in all, it was not a good time to be a journalist, but it did not create a national emergency.
Journalism has been something of an extreme case. The Internet really disrupted journalism. If we look at other industries, like the automotive industry, we see much more modest job losses (like 20% over 15 years). Such modest job losses are still enough to create some pain, but most employees did fine.
There are many more truck drivers than there ever were journalists. It is almost an order of magnitude in difference. And while journalists can find new jobs in the vast fields of communications and entertainment… I don’t think anyone would know what to make of a million unemployed truck drivers.
Yet even if, eventually, all tractor-trailer drivers are replaced by computers, it does not follow that we will experience techno-unemployment in the sense that those in the profession will see their careers destroyed. For that, we would need a sudden fall in the number of jobs.
Calum Chace wrote an excellent book, “The Economic Singularity: Artificial intelligence and the death of capitalism.” As the title of his book suggests, Calum predicts some major disruption.
In particular, Calum predicted the following in an email to me:
By 2030, there will be fewer than 250,000 tractor-trailer drivers in the US, according to official government statistics.
The year 2030 is 13 years away… To fall to 250,000 drivers from 1.7 million in 13 years, we would need to experience a nearly 14% decline every year until then… It is the equivalent of the disappearance of 85% of all jobs… something much more severe than what journalism underwent. Something much, much more severe than what the automotive industry underwent.
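Putting numbers to this claim, a quick sanity check of the required compound decline rate (the 1.7 million and 250,000 figures come from the text above):

```python
initial = 1_700_000  # tractor-trailer drivers today (per the article)
target = 250_000     # Calum Chace's predicted 2030 figure
years = 13

# A constant annual decline r must satisfy: initial * (1 - r) ** years == target
r = 1 - (target / initial) ** (1 / years)
print(f"required annual decline: {r:.1%}")  # about 13.7% per year

# And the total share of jobs that would disappear:
loss = 1 - target / initial
print(f"total decline: {loss:.1%}")  # about 85.3%
```

For comparison, journalism's decline above works out to roughly 4% per year over 26 years.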
I have decided to place a bet against Calum. He wins if, during any year up to 2030 (inclusively), the number of drivers falls below 250,000. I win otherwise. We each put $100 on the bet, with the money going to a charity chosen by the winner.
Though it is a nice theory, techno-unemployment is not currently occurring, despite all the technological progress we see. It is an idea, but not necessarily something that we can observe in the real world.
I think there are a few good reasons why that is.
So even if we could build and sell perfectly safe self-driving tractor-trailer trucks today… one should not underestimate the difficulties we might face in deploying this technology. In fact, I can imagine a scenario where 2030 comes and passes and we still do not see self-driving tractor-trailer trucks on the roads as a matter of course.
Did you know that New Jersey and Oregon, two otherwise reasonable American states, still ban self-service gas stations? And this even though self-service gas stations were introduced aggressively back in the 1970s. How long do you think it will be before all American states allow self-driving tractor-trailer trucks?
As far as I can tell, subways worldwide could easily be automated. It is simply not that hard to automate a subway train. Yet, according to Wikipedia, there are only two automated train networks in Canada.
It is entirely possible that automation will be used mostly to grow the industry. There could still be a million truck drivers by 2030, but maybe they will focus on the less frequent routes while the most intense routes are automated.
Generally speaking, my view is that we are not going to be able to automate fast enough. In most high-income countries, we have aging populations. We are going to run out of nurses willing to work for wages people can afford. I am hoping that health care will improve at an accelerated rate so that we can start getting the diseases of aging under control in 20 years… but even if we do, we simply won’t have an ample supply of new eager employees.
My thesis is that techno-unemployment should be the last of our worries. Soon enough, we will live in countries where a quarter of the population is at an age where we expect people to be retired. It is already the case in Japan.
On the one hand, this should please people who fear overpopulation, as it means a decreasing population (people over the age of 65 tend to have few children). On the other hand, it means few young new employees. We will need all the robots we can get.
We need self-driving trucks because we are going to run out of people to drive them.
Supercomputers are ultrahigh-performance machines that crunch numbers at dazzling speeds in order to advance research in aerospace, climate science, fundamental physics and cosmology, genomics, and medicine — not to mention more mundane applications like big data and financial analysis.
Supercomputing research in the U.S. received a boost with the National Strategic Computing Initiative (NSCI), introduced in 2015 to advance “exascale” supercomputers capable of performing a quintillion operations per second (one exaflop, roughly 30 times faster than today’s fastest computer) by 2025. Of course, other countries, notably including China, have similarly well-funded plans.
A supercomputer is usually thought of as a big machine that fills a whole warehouse and consumes as much power as a small town. There is much truth in the perception of supercomputers as expensive toys that only major governments and global corporations can afford, but the emerging paradigm of distributed supercomputing could make ultrahigh-performance computing pervasive and affordable. The idea of pay-as-you-go, massively distributed cloud supercomputing, once a science-fictional trope (see for example Greg Egan’s cult novel “Permutation City”), is gaining ground — and the blockchain technology that powers Bitcoin could have an important role to play.
Massively distributed supercomputing is suitable for those demanding computing problems that are amenable to massive parallelization — splitting a large problem into many smaller tasks that can be farmed out to a distributed network. But there are ways to “parallelize” many computing tasks that, at first glance, don’t seem suitable for massively parallel computing.
One of the first successful examples of massively distributed supercomputing was SETI@home, a crowdsourced citizen science project started in 1999 to find candidate signals from alien intelligences. The SETI@home system sends millions of chunks of data (“work units”) to the participants, to be analyzed by their home computers. The SETI@home software processes the data on participants’ computers during idle time (originally running as a screensaver) and sends the analysis results back. Initially powered by custom software, SETI@home was later ported to the BOINC (Berkeley Open Infrastructure for Network Computing) platform, which powers other similar projects including the vLHC@home citizen science project for particle physics, managed by CERN.
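The work-unit pattern these platforms rely on can be sketched in a few lines. Here `analyze` is a hypothetical stand-in for the real signal-processing code, and local threads stand in for volunteer machines:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(work_unit):
    # Hypothetical stand-in for the real analysis: score each chunk
    # independently, with no shared state between workers.
    return sum(x * x for x in work_unit)

# Split one large dataset into independent "work units"...
data = list(range(1_000))
units = [data[i:i + 100] for i in range(0, len(data), 100)]

# ...farm them out to workers (volunteer machines in BOINC; local
# threads in this sketch), then combine the partial results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(analyze, units))

total = sum(results)
print(total)
```

The essential property is that each work unit is self-contained, so it can be shipped to any machine and the partial results merged afterward.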
Volunteer citizen scientists contribute their own computing resources to BOINC projects for karma points and the satisfaction of contributing to important causes. But it seems likely that the new crowdsourced supercomputing paradigm can achieve critical mass only once viable ways to financially reward the participants are introduced.
Enter blockchain technology. It’s worth noting that Bitcoin provided a spectacular demonstration that the general masses will flock to the weirdest ideas proposed by scientists and social activists if they can make money in the process.
TechCrunch ran a story on the potential of distributed ledger technology to usher in a new era of massively distributed computing with next-generation platforms that enable participants to lend and borrow computing resources — and make money in the process.
The story mentions several innovative blockchain companies, including Golem, a Polish company that is developing an “Airbnb for computers” able to “reduce costs and increase speed in domains such as scientific research, machine learning and graphics rendering, while making it possible for anyone with an average or better computer to share resources and make a side income,” with an Ethereum-based transaction system to clear payments between buyers, providers and software developers. Golem’s plans have found favor among investors, resulting in the third-largest platform ICO (Initial Coin Offering) ever, which raised $8.6 million in minutes.
Besides processing payments to computing power providers, distributed ledgers can allocate work units based on criteria that include not only price, but also reputation, geographical location and special requirements that can be encapsulated in smart contracts.
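As a sketch of what such an allocation rule might look like, here is a minimal selection function; the fields, weights and provider names are invented for illustration, not taken from any actual platform:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Provider:
    name: str
    price: float       # cost per work unit
    reputation: float  # 0.0-1.0, earned from past completed units
    region: str

def pick_provider(providers, max_price: float,
                  required_region: Optional[str] = None):
    """Hypothetical allocation rule: enforce hard requirements
    (price cap, region), then prefer high reputation, then low price."""
    eligible = [p for p in providers
                if p.price <= max_price
                and (required_region is None or p.region == required_region)]
    if not eligible:
        return None
    return max(eligible, key=lambda p: (p.reputation, -p.price))

providers = [
    Provider("alpha", price=0.50, reputation=0.90, region="eu"),
    Provider("bravo", price=0.30, reputation=0.90, region="us"),
    Provider("carol", price=0.20, reputation=0.60, region="us"),
]
best = pick_provider(providers, max_price=0.40, required_region="us")
print(best.name)  # "bravo": highest reputation among eligible US providers
```

In a blockchain setting, the same logic would live in a smart contract and the reputation scores would be derived from the ledger's history of completed work units.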
“A distributed network of computers managed by blockchain and smart contracts can create a shared economy where anyone with a computer can borrow idle computing power and make a side income,” notes TechCrunch columnist Ben Dickson. “The peer-to-peer nature of the blockchain and distributed ledgers will also help move computation closer to where the data is being generated, and avoid bottleneck round-trips to cloud servers.”
The latter consideration seems especially important. “The biggest problem for computation overall is that the devices generating data are not located close-by to the data centers that perform the analytics,” said David Sønstebø, co-founder of IOTA, a distributed ledger for Internet of Things (IoT) applications.
“Because the blockchain allows for a decentralized infrastructure, it can bring the data closer to their producers and consumers,” added Gilles Fedak, co-founder of iEx.ec, a distributed cloud computing platform, “whereas with centralized cloud computing, data-centers tend to be located in remote areas.”
In fact, while contributors of computing power are unlikely to make more than pocket change by offering their spare computing power, blockchain-based massively distributed cloud supercomputing could create interesting opportunities for operators based close to the data sources — for example, close to where IoT devices collect data for, say, environmental analysis — and able to make an initial investment.
The post Blockchain Technology to Power Next-Generation Distributed Supercomputers appeared first on Bitcoin Magazine.
Today, IBM Watson Health has announced a collaborative project with the U.S. Food and Drug Administration aimed at applying emerging blockchain technology applications toward the advancement of public health.
Through the use of blockchain technology, this project will explore efficient and secure ways for consumers to share their data and make better, more informed healthcare choices. This two-year research agreement will allow IBM and the FDA to explore ways of exchanging patient-level data from a number of different sources, including electronic medical records, clinical trials and genomic tests, as well as information trails from mobile devices and wearables. This process is currently limited by concerns about data integrity, security and breaches that persist with the existing system.
The initial area of focus for this project will target oncology-related data, with IBM Watson Health and the FDA planning to share their initial findings later in 2017.
The key element fueling this push is the IBM Watson Health platform. Launched in April of 2015 as the first commercially available cognitive computing system delivered through the cloud, this system will play a key role in analyzing high volumes of data, sorting through complex questions and proposing evidence-based answers.
Dr. Shahram Ebadollahi, Vice President of Innovations and Chief Science Officer of IBM Watson Health's business unit, told Bitcoin Magazine that the FDA partnership will be aimed at leveraging blockchain technology using IBM’s deep domain expertise in healthcare combined with efforts ensuing from its partnership with the Linux Foundation's Hyperledger project.
“We plan to examine how this technology may offer a framework that could one day result in a secure patient/consumer exchange of health data for various purposes. Achieving this will require us to address a number of issues like privacy, interoperability and the issue of trust. That’s no small task.”
The ultimate intent of these efforts is to provide patients with a mechanism for securely sharing their health data for research purposes across the healthcare spectrum while creating opportunities for broader improvement to the existing, much-maligned healthcare delivery model.
In a recent IBM Institute for Business Value paper “Healthcare Rallies for Blockchain,” the result of a survey of about 200 healthcare executives, more than seven in ten industry leaders believe that the highest return on investment for blockchain technology involves the management of clinical trial records, medical/health information and regulatory compliance. Through the examination of the blockchain model, IBM and the FDA will look to see if it can deliver value to public health efforts through information exchange across a wide variety of data types, including clinical trials and “real world” evidence data.
Ebadollahi noted that this data is currently entrenched in myriad silos that restrict its purposeful use. It’s here, he says, that the blockchain’s distributed ledger capabilities can play an important role in facilitating the audit trail of information, thereby freeing it for greater utilization.
“The underlying proposition we’re seeking to address is how to place the health data in the hands of the consumer for use with their consent. Given the volume and variety of data we are dealing with here, we believe that blockchain [technology] can potentially make a meaningful impact without the need for a middleman or central clearinghouse.”
In terms of his long-term hope for this project, Ebadollahi said, “As growing numbers of potential applications become evident, IBM hopes to refine technological approaches toward building a scalable and decentralized infrastructure for more widespread use. I believe this involves placing consumers and patients at the center of how this data is going to be used and shared. Ultimately this is the key.”
The post IBM Watson Health and FDA to Collaborate on Blockchain Project appeared first on Bitcoin Magazine.
"Steemit, creators of the world’s fastest growing decentralized social media platform, has unveiled today what it describes as a “comprehensive roadmap” detailing improvements to its blockchain, company structure and website. As part of its plans Steemit is to migrate its servers this first quarter to Amazon Web Services (AWS), the world's largest hosting provider." -Source: Forbes
Link to full article: http://www.forbes.com/sites/rogeraitken/2017/01/17/fintech-steemit-migrating-servers-to-amazon-unveils-blockchain-roadmap/#2f641bcb6b47
We have updated the Climate Bet chart with the December 2016 global temperature anomaly data from UAH. (Click on the small chart to the right for a more detailed image.) 2016 was a warm El Niño year, but ended with a sharply cooler month at 0.24°C; somewhat closer to Professor Armstrong’s no-change forecast of 0.159°C than to Mr Gore’s IPCC dangerous warming trend figure for December 2016 of 0.443°C.
With the data in for 9 of the Climate Bet’s 10 years, the cumulative absolute error of the dangerous warming trend that the IPCC and Mr Gore warned that we should expect is nearly 23% greater than the error of the scientific no-change forecast that is the basis of Professor Armstrong’s bet. The no-change forecast has been more accurate in 78 of the 108 months of The Bet to-date.
Despite 30 months of The Bet in which the warming trend was more accurate, the cumulative error of the Gore/IPCC dangerous warming projection has been larger than that of the Armstrong no-change forecast for all but two months of The Bet so far.
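For readers curious how the scoring works, the bet's metric can be sketched in a few lines; the anomaly series below is invented for illustration, not the actual UAH data:

```python
# Sketch of The Climate Bet's scoring rule: cumulative absolute error
# of each forecast against observed monthly anomalies.
observed  = [0.20, 0.25, 0.18, 0.30, 0.24]   # monthly anomalies, degrees C
no_change = [0.159] * len(observed)          # Armstrong: flat forecast
warming   = [0.40, 0.41, 0.42, 0.43, 0.44]   # IPCC-style rising trend

def cumulative_abs_error(forecast, actual):
    return sum(abs(f - a) for f, a in zip(forecast, actual))

e_flat  = cumulative_abs_error(no_change, observed)
e_trend = cumulative_abs_error(warming, observed)
print(f"no-change error: {e_flat:.3f}, trend error: {e_trend:.3f}")
```

A forecast wins a given month when its absolute error for that month is smaller; the cumulative totals decide the bet overall.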
“He who controls the past controls the future. He who controls the present controls the past.”
― George Orwell, 1984
In 1999, Dr. James Hansen at NASA was troubled that the US wasn’t warming, and his fake global surface temperatures were.
Hansen even complained that the US was cooling.
This was Hansen’s US temperature graph in 1999.
So Jim and his buddies at NOAA got together and simply altered the US data, thus creating a fake warming trend. Problem solved!
Essentially all US warming is due to data tampering.
However, Hansen still had a serious problem. As of the year 2000, his global temperatures didn’t show enough warming to scare anyone. Hansen only showed about 0.5C warming since the start of records.
So Hansen and his buddy Gavin Schmidt altered the global temperature record too, and turned 0.5C warming into 1.4C warming.
Another impressive hockey stick of data tampering.
And what about the hockey stick of warming since 2000? Satellites show that is also fake.
Even Michael Mann admits that Gavin’s hockey stick of warming since 2000 is fake.
Both the US and global surface temperature records are fake. The people behind this need to be shut down. Their fake data has wrecked science, cost the global economy trillions, and hurt countless poor people.
As was the case 110 years ago, it is up to the independent press (i.e., bloggers) to call out this misuse of meteorological data, which is a “crime against the community.”
by Judith Curry
Effective January 1, I have resigned my tenured faculty position at Georgia Tech.
Before reflecting on a range of things, let me start by answering a question that may have popped into your head: I have no plans to join the Trump administration (ha ha).
Technically, my resignation is a retirement event, since I am on the Georgia State Teachers Retirement System, and I need to retire from Georgia Tech to get my pension (although I am a few years shy of 65). I have requested Emeritus status.
So, I have retired from Georgia Tech, and I have no intention of seeking another academic or administrative position in a university or government agency. However, I most certainly am not retiring from professional life.
Why did I resign my tenured faculty position?
I’m ‘cashing out’ with 186 published journal articles and two books. The superficial reason is that I want to do other things, and no longer need my university salary. This opens up an opportunity for Georgia Tech to make a new hire (see advert).
The deeper reasons have to do with my growing disenchantment with universities, the academic field of climate science and scientists.
I’ve been in school since I was 5 years old. Until a few years ago, I regarded a tenured faculty position at a major university to be a dream job, and I couldn’t imagine doing anything else.
Apart from my own personal career trajectory and the ‘shocks’ that started in 2005 with our hurricanes and global warming paper, and the massive spike in 2009/2010 from Climategate, I’ve found that universities have changed substantially over the past 5-10 years.
At first, I thought the changes I saw at Georgia Tech were due to a change in the higher administration (President, Provost, etc). The academic nirvana under the prior Georgia Tech administration of Wayne Clough, Jean-Lou Chameau and Gary Schuster was a hard act to follow. But then I started to realize that academia and universities nationwide were undergoing substantial changes. I came across a recent article that expresses part of what is wrong: Universities are becoming like mechanical nightingales.
The reward system that is in place for university faculty members is becoming increasingly counterproductive to actually educating students to be able to think and cope in the real world, and in expanding the frontiers of knowledge in a meaningful way (at least in certain fields that are publicly relevant such as climate change). I’ve written on these topics before, I won’t belabor this here.
So why not try to change the system from the inside? Well, this is not the battle I want to fight, apart from any realistic assessment of being able to shift the ponderous beast from within.
Or maybe it’s just a case of ‘wrong trousers’ as far as I’m concerned. Simply, universities no longer feel like the ‘real deal’ to me (note: this criticism is not targeted at Georgia Tech, which is better than most). It’s time for me to leave the ivory tower.
A deciding factor was that I no longer know what to say to students and postdocs regarding how to navigate the CRAZINESS in the field of climate science. Research and other professional activities are professionally rewarded only if they are channeled in certain directions approved by a politicized academic establishment — funding, ease of getting your papers published, getting hired in prestigious positions, appointments to prestigious committees and boards, professional recognition, etc.
How young scientists are to navigate all this is beyond me, and it often becomes a battle of scientific integrity versus career suicide (I have worked through these issues with a number of skeptical young scientists).
Let me relate an interaction that I had with a postdoc about a month ago. She wanted to meet me, as an avid reader of my blog. She works in a field that is certainly relevant to climate science, but she doesn’t identify as a climate scientist. She says she gets questioned all the time about global warming issues, and doesn’t know what to say, since topics like attribution, etc. are not topics that she explores as a scientist. WOW, a scientist that knows the difference! I advised her to keep her head down and keep doing the research that she thinks interesting and important, and to stay out of the climate debate UNLESS she decides to dig in and pursue it intellectually. Personal opinions about the science and political opinions about policies that are sort of related to your research expertise are just that – personal and political opinions. Selling such opinions as contributing to a scientific consensus is very much worse than a joke.
Stepping back from all this, I reminded myself that I was a tenured faculty member – in principle I could do whatever I wanted. The intellectual pursuits that now interest me are:
When I first started down this new path in 2010, I published papers that could be categorized as applied philosophy of science (e.g. uncertainty monster, etc). This seemed to be a path towards maintaining academic ‘legitimacy’ in light of my new interests, but frankly I got bored with playing the game. Why go to the extra effort to publish papers, wrestling with reviewers who (usually) know less than you do about your topic (not to mention their biases), having to pay to get an article published some months in the future, so that maybe 100 people will read it? Not to mention the broader issues related to coping with the university bureaucracy, government funding, etc.
Once you detach from the academic mindset, publishing on the internet makes much more sense, and the peer review you can get on a technical blog is much more extensive. But peer review is not really the point; provoking people to think in new ways about something is really the point. In other words, science as process, rather than a collection of decreed ‘truths.’
At this point, I figure that I can reach more people (including students and young researchers) via social media. Do I pretend to have any answers to all this? No, but I hope I am provoking students and scientists to think outside of their little bubble.
The real world
So my fall from the ivory tower that started in 2005 is now complete [link to my 2006 AGU presentation agu_integrityofscience_curry] .
I am interested in figuring out new and better ways to apply weather and climate data, weather forecast information and future regional climate scenarios to support real-world decision making and manage risks associated with weather and climate variability.
I became interested in such applications over a decade ago, and Peter Webster and I founded a company Climate Forecast Applications Network (CFAN) to do just that. If you haven’t checked out our website (ever or even recently), check it out – cfanclimate.net – I spent my entire winter break revising the website using some good suggestions from Larry Kummer of Fabius Maximus fame.
CFAN started as a university start-up company in 2006 and didn’t have any full-time employees until a few years ago. We now employ 7 Ph.D. scientists (in addition to myself and Peter), plus software engineers, etc. With my retirement from Georgia Tech, we are spinning the company up into a new phase to explore new forecast product developments and decision support tools, new markets, new partnerships and new regions.
So far, most of CFAN’s revenue comes from the ‘weather’ side (days to seasons), with a few projects on developing future climate scenarios (I wrote about a current project here Generating regional scenarios of climate change).
I find all this tremendously interesting, challenging and rewarding. Not to mention enormously time consuming (CFAN needs to make more money so that we can hire more people to take some of the load off myself and the other managers, all of whom are wearing too many hats). I am learning a huge amount about decision support, management, marketing and sales, finance, etc.
At this point, the private sector seems like a more ‘honest’ place for a scientist working in a politicized field than universities or government labs — at least when you are your own boss.
So, where does all this leave my endeavors with social media (including Climate Etc.)? Resigning my faculty position and taking on a full-time-plus position running CFAN actually means less time for blogging, rather than more (at least in the near term).
I remain very interested in the interactions afforded by social media. However, over the past year I have devoted considerably less time to writing original material for Climate Etc. Apart from being really busy, I have been spending more time on twitter (which is a much smaller time investment).
I will be starting a new blog for CFAN, more focused on weather and shorter-term climate issues (I will cross post any relevant posts at Climate Etc.)
I will also try to write more frequent but shorter posts at Climate Etc., with short excerpts and brief comments on some of the articles that I am tweeting about. I will be relying on guest bloggers to provide more technical analyses. So I definitely intend to keep the blog going, but in context of managing a very busy schedule.
We’ll see how all this plays out, but I figured I’ve earned the right to explore and do what I want. This is my definition of academic freedom (and I’m not asking anyone else to pay for it).
In The Better Angels of Our Nature, Steven Pinker convincingly makes the point that by pretty much every measure you can think of, violence has declined on our planet over the long term. More generally, "the world continues to improve in just about every way." He's right, but there are two important caveats.
One, he is talking about the long term. The trend lines are uniformly positive across the centuries and mostly positive across the decades, but go up and down year to year. While this is an important development for our species, most of us care about changes year to year -- and we can't make any predictions about whether this year will be better or worse than last year in any individual measurement.
The second caveat is both more subtle and more important. In 2013, I wrote about how technology empowers attackers. By this measure, the world is getting more dangerous:
Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious... and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.
This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.
Pinker's trends are based both on increased societal morality and better technology, and both are based on averages: the average person with the average technology. My increased attack capability trend is based on those two trends as well, but on the outliers: the most extreme person with the most extreme technology. Pinker's trends are noisy, but over the long term they're strongly linear. Mine seem to be exponential.
When Pinker expresses optimism that the overall trends he identifies will continue into the future, he's making a bet. He's betting that his trend lines and my trend lines won't cross. That is, that our society's gradual improvement in overall morality will continue to outpace the potentially exponentially increasing ability of the extreme few to destroy everything. I am less optimistic:
But the problem isn't that these security measures won't work -- even as they shred our freedoms and liberties -- it's that no security is perfect.
Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We'll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.
As it gets easier for one member of a group to destroy the entire group, and the group size gets larger, the odds of someone in the group doing it approach certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn't kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep us safe; then, someone goes off and destroys us anyway?
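The "approaches certainty" claim is just the arithmetic of independent rare events: if each of n individuals has even a tiny per-person probability p of acting, the chance that at least one does is 1 − (1 − p)^n, which climbs toward 1 as n grows. A minimal sketch of that arithmetic (the probabilities and group sizes here are illustrative assumptions, not figures from the essay):

```python
def p_at_least_one(p: float, n: int) -> float:
    """Probability that at least one of n independent actors,
    each with per-actor probability p, carries out the act."""
    return 1 - (1 - p) ** n

# Illustrative only: even a one-in-a-million per-person chance
# becomes overwhelming once the "group" is the whole planet.
for n in (1_000, 1_000_000, 8_000_000_000):
    print(f"n = {n:>13,}  P(at least one) = {p_at_least_one(1e-6, n):.6f}")
```

The point of the sketch is the shape of the curve, not the particular numbers: holding p fixed, growth in n alone drives the probability toward 1, which is why expanding the "group" to everyone on the planet matters.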
Clearly we're not at the point yet where any of these disaster scenarios has materialized, and Pinker rightly expresses skepticism when he says that historical doomsday scenarios have so far never come to pass. But that's the thing about exponential curves: it's hard to predict the future from the past. So either I have discovered a fundamental problem with any intelligent individualistic species and have therefore explained the Fermi Paradox, or there is some other factor in play that will ensure that the two trend lines won't cross.