Shared posts

24 Oct 15:44

Our Two Parties Shift Their Positions A Lot

by admin

From an interview of political scientist Steven Teles by Megan McArdle:

In political science we often model political actors as having fixed interests and positions, and then we try to figure out how they do or don't get their way. But there's actually more play in the joints of politics than that. Some people -- like Ronald Reagan! -- just switch teams entirely. More broadly, as we address in the book, entire parties switch their positions. If we want to understand politics, we need some way of understanding that process.

As I grow older, and have had more time to observe, I find the shifts in party positions fascinating and oddly opaque to most folks who are in the middle of them -- perhaps this is one advantage of belonging to neither major party. Some of the shifts are generational -- for example, both parties have moved left on things like homosexuality and narcotics legalization. Some of the shifts have to do with who controls the White House -- the party in power tends to support executive power and military interventionism, while the opposition tends to oppose these things. Some of the shifts have to do with who controls intellectual institutions like colleges and the media -- the group in control of these institutions tends to be more open to First Amendment restrictions, while the out-of-power group becomes a desperate defender of free speech (look how the campus free speech movement has shifted from the Left to the Right).

I would love to see a book on this covering the last 50 years.

24 Oct 16:01

Asking the Wrong Question

by admin

Apparently a chunk of what looks like manufactured aluminum was dug up years ago in Romania and was dated at up to 250,000 years old.  By this dating -- given the technology required to make aluminum -- it would be unlikely to be man-made.

So of course everyone is focusing on the question of whether it is an alien artifact.  Which is the wrong question.  A rational person should be asking, "what is it about this particular metallurgy or the way in which it was buried that is fooling our tests into thinking that a relatively new object is actually hundreds of thousands of years old?"  I would need to see folks struggle unsuccessfully with this question for quite a while before I would ever use the word "alien."  I am particularly suspicious of tests that have an error bar running between 400 years and 250,000 years.  That kind of error range is really close to saying "we have no idea."

Postscript:  The article hypothesizes that it looks like an axe head.  Right.  Aliens find some way to fly across light-years, defying much of what we understand about physics, and then walk out of their unimaginably advanced spacecraft carrying an axe to chop some wood, when the head immediately goes flying off the handle and has to be left behind as trash.

27 Oct 18:47

FDA Bureaucracy Blocks ‘Miracle Drug’ from 7-Year-Old

by TAC Daily Updates

by Mark Flatten, Goldwater Institute

How do you tell a 7-year-old child she must go back on a feeding tube? Cassie Le is facing that terrible question.

Her daughter spent the first seven months of her life unable to eat without vomiting. The doctors ran the usual tests and tried the usual medications. Nothing worked.

The girl was put on a feeding tube when she was 7 months old. It helped some, but she continued to be sick regularly.

Finally, doctors tried domperidone, a medication routinely used by millions of people throughout the world to treat gastric conditions like those afflicting Le’s daughter. It worked. The feeding tube was removed about four years ago, and since then the girl has been able to eat normally.

That is coming to an end because of new rules restricting the availability of domperidone imposed by the federal Food and Drug Administration, which never approved its use in the United States. So people like Le’s daughter can no longer legally get it.

“I fear she will need to get a feeding tube placed back in,” Le says. “How do I tell her that? How would you as a parent tell your child … ‘sorry the medicine that has helped you to eat and enjoy eating is no longer available so now you can’t eat … you will get fed to your intestine by a feeding tube.’ She is a little girl who has worked extremely hard to overcome her fears of eating and now her joy of being a normal little girl will be taken away.”

Learn more about how federal red tape is blocking access to this potentially life-saving treatment for Cassie Le and thousands of other Americans in a new investigative report by the Goldwater Institute, Sickening: FDA Bureaucracy Blocks Common “Miracle Drug.” Please click here to read the full report.

26 Oct 21:08

Breakthrough Listen Project Funded by Zuckerberg to Search for Aliens Around Weird Star

If aliens live around Tabby's star, Breakthrough Listen Initiative, a project funded by physicist Stephen Hawking, Facebook CEO Mark Zuckerberg and Russian entrepreneur Yuri Milner, will try to find them.

27 Oct 20:48

Andreas M. Antonopoulos talking about STEEM & Steemit! One of the most brilliant minds in the blockchain space


Didn't pay attention to most of it, but the 4-minute video at the bottom is interesting.

Bitcoin, security, entrepreneur, coder, hacker, author, humanist, pacifist. Working on cryptocurrencies, and author of *Mastering Bitcoin* and *The Internet of Money*, **Andreas Antonopoulos** is for me one of the most brilliant minds and public speakers in the blockchain, Bitcoin, and cryptocurrency space. He has consulted for several bitcoin-related startups and is a permanent host of the Let's Talk Bitcoin podcast. He served as head of the Bitcoin Foundation's anti-poverty committee until 2014, then joined as Chief Security Officer and, after that, advisor to the board. On October 8, 2014, Antonopoulos spoke in front of the Banking, Trade and Commerce committee of the Senate of Canada to address the senators' questions on how to regulate bitcoin in Canada (read more on his Wikipedia).

I can tell that this guy gives the most compelling analysis of the power of Bitcoin, blockchain, and cryptocurrencies. He can be very persuasive, with strong arguments, a deep understanding of the issues, and a clear vision for the future. If you don't know him, just watch these few videos:

## "Blockchain" or Bitcoin: Understanding the differences

> In this talk, Andreas explores the rise of the term "blockchain" as a counterweight to bitcoin. The term blockchain does not provide a definition, as it has been diluted to be meaningless. Saying "blockchain" simply invites questions, such as "what is the consensus algorithm?" Meanwhile, bitcoin continues to offer an alternative to the traditional financial system. Andreas looks at the value of private ledgers, which he sees as having a small impact on finance, versus open, global and accessible payment and currency systems such as bitcoin, which he sees as fostering a global revolution in finance and access to financial tools.

## The Future of Cryptocurrencies

> In this talk, from the Texas Bitcoin Conference in 2014, Andreas explores the rise of cryptocurrencies and shares his vision for the future of cryptocurrency.
***

# In this talk, Andreas answers a public question: **"What's your opinion on STEEM?"**

## Bitcoin Q&A: Steemit, Yours Network, and future social platforms

Keywords/phrases:

> *I get suspicious when I hear the sales pitch that I "will make thousands and thousands of dollars very quickly and easily." I don't like that it revolves around making money fast instead of focusing on the quality of content or building a community organically. There is a set of parameters, combining social media, micropayments, and behavioural incentives and disincentives, that would make for a very interesting platform. Using money not as a system of remuneration but of quality discovery backed by game-theoretical models at scale; wisdom of the crowd + micropayments + behavioural science = fewer trolls. This is an emergent area. Social media platforms require a very high critical density of adoption before they're effective, where you know enough people who have access to the underlying infrastructure and are willing to engage in it. Bootstrapping social media and digital currencies, where you have a very limited community, is going to be difficult for the next decade as it experiments, fails, and builds momentum.*

I do believe the Steem blockchain is heading in the right direction, but we must draw the necessary conclusions from the beta, the inflation debate, the advertising/pay-for-attention strategy, and the marketing strategy and business development, to figure out the best way to bring in new users, work on user retention, and reach mass adoption. Now that Steem's value is almost back to where it started, a new history begins to be written.

***

Bonus: more interesting content from Andreas

* Video: "Consensus Algorithms, Blockchain Technology and Bitcoin"
* YouTube channel
* Twitter

27 Oct 20:42

Steemit Photo Challenge #15: Droplets of rain, caught.

After the rain. The droplets are caught.

PB0301376f1b2.jpg (OLYMPUS IMAGING CORP. D595Z,C500Z, 10/400s, ƒ/2.8, ISO 50, 6.3mm)

For full size, click on the photo. Enjoy watching!

#photography #steemitphotochallenge #leylar-photo

27 Oct 15:44

Read Uber's entire 98-page plan to make flying cars a reality in the next decade

by Avery Hartmans

Uber VTOL helipad

Uber has a new plan for making commuting faster: flying cars. 

The company debuted its new project for electric aircraft that takes off and lands vertically in a white paper it published Thursday. Dubbed Uber Elevate, the project aims to have the aircraft in cities by the year 2026. 

The aircraft would be used to shorten commute times in busy cities, without the noise and pollution of helicopters. The vehicles would be able to travel at about 150 mph for up to 100 miles and carry multiple people, including a pilot, according to a piece about the project from Wired's Alex Davies. While the first vehicles will be ready by 2021, the expected roll-out date is 2026.

Uber doesn't plan to make its own vehicles, but will instead partner with other companies and the government to make it happen. 

Uber's plans for the project are published in a 98-page document that outlines the feasibility of bringing the aircraft to market, how the vehicles will work from a technical standpoint, and how Uber plans to work within other constraints like weather conditions and government regulations.



21 Sep 17:07

Can a Rubik’s Cube Teach You Programming?

by Peter Denning

Ernő Rubik invented the Rubik's Cube in 1974 and it became the world's most popular puzzle. The cube consists of 26 cubelets that move and turn when the faces are twisted. The cube (pictured above) is in a solved position when each face is a uniform color. The goal is to take a randomized cube through a series of face twists to transform it into the solved position. Learning to solve a Rubik's Cube can teach us something about learning to program.

Programming has always been seen as a skill in addition to a thinking process. But what exactly does it mean when we say programming is a skill? How is this a useful insight?

Here is the usual story about skills. A skill is an ability, developed over time with practice, that becomes embodied and automatic. We do not need to think about a skill; we just do it. We can perform at different levels of competence including beginner, advanced beginner, competent, proficient, expert, and master. This idea of levels of competence can be seen in many places—for example the five levels of airline pilot certification, or the achievement levels in many online games. It takes time, practice, commitment, and experience to progress through the levels: from hesitant, mistake-prone, rule-based behaviors as a beginner to fully embodied, intuitive, and game-changing behaviors as a master. Most skills are supported by sensibilities, or being able to sense moods, emotions, and how others will react. A skill is both individual and social—standards for levels of competence are set by a community. A skill is not the application of knowledge; you cannot become good at something simply by reading books, watching videos, or listening to lectures. You need practice until you are good at it. When you embody a skill, you perform it without having to think much about it; your body automatically behaves in a skillful way.

Unfortunately, this story does not help the many novice programmers who are among my students. They already know they must practice a lot, but they get frustrated by the number of mistakes they make and discouraged by the time it takes to find and correct mistakes. They know expert programmers can help them, but are often put off when the experts tell them what to do rather than let them discover for themselves. In short, the usual skill story may be correct, but it does not help with basic concerns among beginning programmers.

Let me tell this story from another angle and see if we can learn anything about learning to program well.


Recently I discovered a decaying business card in a hidden compartment of my wallet. It was a crib sheet I wrote in 1983 to help me remember how to solve the Rubik’s Cube, which was very popular at the time. I put down the Cube for the last time in 1984 after mastering it because it was no longer a challenge. I never thought about it again. But the crib sheet rekindled my interest. I found my old Cube and stared at it and the crib sheet, only to discover that I remembered absolutely nothing at all about the strategy of solving the Cube or what the individual formulas on my crib sheet meant. To resurrect my defunct skill I decided I needed to relearn the solution method from scratch.

To get started, I searched the Web for solution methods. I found many. Some are easy to remember; others are complex and used only by speed experts. In 1983 there was no Web to search and no books to read. It was very hard to find anyone who had a method, and I did not have the patience to work out my own method. Given the Cube’s novelty, most of the 1983 solvers were mathematicians who used group theory to find solutions. Each configuration of the cube was an element of the group, and each twist operation permuted some of the cubelets. The objective was to find algorithms—each a series of twists of the faces—that had a net effect of repositioning two or three cubelets and leaving everything else untouched. These algorithms could then be used as tools to work a random Cube into a solved Cube.

Over the years, the cubing community evolved many Cube-solving strategies. They did this by trial and error, not by applying group theory. The simplest strategy is to solve the bottom layer, then the middle, and finally the top. Each layer gets a little harder because the algorithms to solve the middle layer cannot affect anything in the bottom layer, and similarly the algorithms to solve the top cannot affect anything in the other two.

In visiting various web pages, I learned cube solvers had developed a standard notation for twists. They had also developed libraries of standard algorithms. The notation and algorithms were not settled in 1983. I learned most of the easier solution strategies relied on about eight algorithms and required about 180 twists to solve a cube, but advanced cubers have evolved larger libraries of algorithms with which they solve cubes in about 60 twists. Also, any cube, however scrambled, can be solved in at most 20 twists; the hardest positions require exactly 20.
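
The standard twist notation mentioned above (Singmaster notation: F, U, R, L, B, D for clockwise quarter turns of each face, a prime mark for a counterclockwise turn, a 2 for a half turn) lends itself to simple manipulation in code. As a minimal sketch -- the function names here are mine, not from any cubing library -- this is how an algorithm written in that notation can be inverted, i.e., turned into the sequence of twists that undoes it:

```python
def parse(alg: str) -> list[str]:
    """Split a Singmaster-notation algorithm into individual twists."""
    return alg.split()

def invert_move(move: str) -> str:
    """Invert a single twist: R -> R', R' -> R; a half turn (R2) is its own inverse."""
    if move.endswith("'"):
        return move[:-1]
    if move.endswith("2"):
        return move
    return move + "'"

def invert(alg: str) -> str:
    """Undo an algorithm: invert each twist and apply them in reverse order."""
    return " ".join(invert_move(m) for m in reversed(parse(alg)))

print(invert("R U R' U'"))  # U R U' R'
```

Reversing the order matters: undoing a sequence of moves requires retracting the last move first, which is exactly what a cuber does when backing out of an errant twist mid-solve.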

There are even speed tournaments for cube solvers. Contestants get a randomized cube and then they solve it as fast as possible while judges measure them with stopwatches. I found demonstrations on YouTube of some advanced cubers doing their work. All I could see was a flurry of fingertip motions, over in a jiffy. The world record is just over 5 seconds.  Can you believe that?  Theoretically an optimal solver has to complete 20 twists in 5 seconds to beat that record. An expert solver probably completes up to 8 twists a second, and can finish fast because of a large library of specialized but efficient algorithms.  It's hard to believe anyone can move that fast, but when you see the videos you know it's true.

I also discovered some expert cubers had designed their own cubes that moved exceptionally slickly and did not “pop.” If you were too clumsy with an older Cube you could suddenly wedge a face in mid twist and the whole thing pops apart. I went and bought a “tournament Cube” and it was indeed slick and smooth compared with my original Cube.

That is what I learned from my research. Now I had to put it to work with practice.

First I had to select one of many solution strategies and learn its algorithms. This was not as straightforward as I imagined. I chose an eight-algorithm strategy described by Arthur Benjamin, a mathematics professor at Harvey Mudd College, who learned it from a world champion cuber. At first I kept trying to relate his eight algorithms to the mysterious notations on my wallet crib sheet. No luck. Waste of time. Finally I decided to pretend I knew nothing, learn the algorithms, and see where I wound up. My first cube solution took me about an hour.  I kept making mistakes and having to start over almost at the beginning. It was frustrating and exhausting. When I got my first solution, I was surprised that it worked at all.

The next day I tried again. I achieved my first solution in about 45 minutes (not as many mistakes) and a second solution in 30 minutes. The overall layer-by-layer strategy started making more sense and I made fewer mistakes.

I practiced the algorithms for about an hour a day. It paid off. On the fifth day I was able to solve a Cube in five minutes. I wrote down the key algorithms I needed on a new crib file card. As long as I could remember when each algorithm should be applied I could solve the cube in five minutes.  Occasionally I made mistakes and had to start over.

On the sixth day I decided to memorize the algorithms. I gave them funny mnemonics such as "the rural algorithm," "the furl-unfurl algorithm," "the rouser algorithm," the "radio frequency algorithm," and the "fast-fourier algorithm." (These odd names are plays on the letters F, U, R, L, and B that name all the twists in the algorithms I was using.) Each algorithm had progressively more twists, but with some chunking I was able to correctly remember each one. For example, the rouser algorithm takes eight twists, the radio frequency algorithm 12 twists, and the fast-fourier algorithm 12 twists. At first I had to close my eyes and concentrate so I did not make any mistakes, but soon I developed a kinesthetic sense of how each algorithm "felt." I could instantly detect if I made a mistake because it felt wrong, allowing me to reverse the errant twist. After a short time of practice I no longer had to whisper the steps to myself, as each algorithmic sequence just seemed to flow out.

On the seventh day I tried these algorithms on solved Cubes to see how cubelets were being permuted. I realized that if most of the algorithms are repeated four to six times the cube is restored to its solved position. I was then able to easily figure out how to orient the cube before applying an algorithm, thus shortening my solution time to three minutes.
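
The observation that repeating an algorithm several times restores a solved cube is a property of permutations: every algorithm permutes the cubelets, and repeating any permutation enough times returns everything to where it started (the permutation's "order"). A small sketch of that idea -- the permutation below is a made-up illustrative example, not a real cube model:

```python
from itertools import count

def compose(p, q):
    """Apply permutation q after permutation p: result[i] = q[p[i]]."""
    return tuple(q[i] for i in p)

def order(perm):
    """Smallest n such that applying perm n times is the identity."""
    identity = tuple(range(len(perm)))
    current = perm
    for n in count(1):
        if current == identity:
            return n
        current = compose(current, perm)

# Hypothetical net effect of an algorithm on 8 corner slots:
# a 3-cycle of slots 0 -> 1 -> 2 -> 0, with everything else fixed.
three_cycle = (1, 2, 0, 3, 4, 5, 6, 7)
print(order(three_cycle))  # 3: repeating the algorithm 3 times restores the cube
```

An algorithm whose net effect is a three-cycle has order 3; algorithms that also reorient pieces in place can have higher orders, which is consistent with some algorithms above needing more repetitions than others to bring a solved cube back to solved.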

This was an amazing progression, from a rank beginner to a pretty competent cuber in seven days.


I learned a number of lessons from this—all of which can be applied to becoming and staying competent as a programmer.

I experienced moving from a beginner to advanced beginner to competent. Each stage had its own feel. In the beginner stage, I knew nothing and was easily frustrated that I knew nothing. I kept making mistakes that were very costly; the time it took to recover was significant. My frustration was compounded because I’m an expert in other fields and I am used to being an expert. It had been so long since I was a beginner I had no memory of being a beginner. Even though I had no choice but to be a beginner, I fought it mightily. I only made progress when I finally decided to accept that I was a beginner and knew nothing.

By the third day I had developed a bit of a “feel” for some of the moves and configurations. They looked and felt familiar. It took me less time and I made fewer mistakes. My mood changed from beginner’s frustration to a sense of hope that I could learn this; I had a sense of ambition that I could become competent.

By the sixth day I had a strong “feel” for the moves and could quickly tell by inspection how to orient the cube so that the algorithm required the fewest repetitions. I was developing a mood of confidence that I knew what I was doing.

By the seventh day I paid attention to details I would never have noticed on day one, such as watching how the two advanced algorithms permuted the cubelets. My confidence had grown and I felt I could do well in a tournament of people who could solve in three minutes.

The entire process, from day one to day seven, was also an experience of expanding awareness of the cube and all the social practices (such as community-generated algorithms and tournaments) that had grown up around it.


Seven days of about an hour of practice a day moved me from a rank beginner to a competent cuber.  I was not proficient or tournament ready, but I could solve cubes every time without making mistakes. I experienced the growing sense of confidence in my performance ability and the falling away of the frustrations of "beginnerhood." I had moved from an intellectual experience as a beginner (what rule do I apply next?) to a kinesthetic experience (what does the right algorithm feel like?). I developed a new appreciation for the travails experienced by a beginner and the importance of the wisdom to keep practicing even if you don't see yourself making progress.

In other words, in seven days I experienced in a microcosm the same thing I had experienced in slow motion when I learned programming.  I now have a much greater appreciation of how a programmer chooses a good algorithm for each step of a design. When I was a beginner, this choosing was purely an intellectual exercise, and as I became competent it transformed into a kinesthetic exercise. When we say that good programmers have a good "feel" for designing programs, I see that is literally true.

Although I once had a cuber skill, this whole experience did not feel like relearning a long-lost skill. It felt like learning a new skill from scratch.   What I had really lost, and have now recovered, was a sense of how to be a beginner.

So much programming today is done by reuse—snipping code from other sources or grabbing code segments from libraries. That is just like selecting an algorithm to move the cube one step closer to solution. When you get good at that kind of programming, you have a kinesthetic feel for what code segments will work best. Of course programming is not completely kinesthetic, because you have to think about what you are doing.

Try this yourself. Get a Rubik's Cube, find a solution strategy, and do this experiment. See what it is like to be a beginner. See how unhelpful it is for an expert to tell you as the beginner how to do things. The best thing an expert can do is guide your beginner practice to find those aha learning moments for yourself. Without the practice and learning moments you will never progress beyond beginner. I speculate the experiment will give you compassion for beginners and restore your memory of what beginnerhood was like. I gained a lot of compassion for myself; at the times when I was thrust into having to be a beginner, my tendency was to fight mightily against it. I speculate it will give you compassion for yourself too: The next time you find yourself having to be a beginner, you will know what to do.

The post Can a Rubik’s Cube Teach You Programming? appeared first on BLOG@UBIQUITY.

26 Oct 15:14

Weird Science!: Dark Energy and Supernovas! (Dark Energy Overturned?)

Dark Energy... That's the theory that most of the mass/energy of the universe is missing, right? In the late 1990s a group of scientists examined a large catalog of Type IA supernovae and found that not only is the Universe expanding, it's doing so at an accelerating rate! This meant that something in the universe was causing the expansion, and this was attributed to a heretofore unknown energy source that we have been calling *Dark Energy* ever since. It turns out they were probably wrong, but the reason why they were wrong is a fascinating story of how real science in action is supposed to work.

Since the 1950s, it's been a well-known fact that Type IA supernovae are the result of a specific set of processes. These processes always result in a specific admixture of elements as the end result of the life cycle of the parent star. The net result of knowing all of this is that Type IA supernovae always occur with a specific intensity, and thus they can serve as a sort of "standard candle". We can then use the perceived luminosity of the event as seen here on earth to determine how far away the event occurred, and we can use the redshift to determine how far back in time the event occurred. This was all common knowledge circa 1990, when the lambda value (the density of dark energy in the universe) was derived and we were all suddenly faced with the daunting realization that 80% of our universe is just missing!

Now fast forward twenty years. A subtle and unheralded discovery occurred that had shocking consequences, as I predicted it would. As it turns out, Type Ia supernovae are NOT all the same. There are actually different Type IA supernovae possible, based on the mass of the star and the metal content of the star (to astronomers, anything heavier than helium is a metal). As I said back in 2014, when Milne et al. was still in preprint...
> These supernovae which we thought were the same are actually different. Low metal stars were much more common earlier in the history of the universe. This "standard candle" is anything but. Be prepared to have a paradigm shift about dark energy, the age of the universe and possibly the big bang itself.

In plain English this meant that 20 years of textbooks needed to be revised. What we thought were standard candles were nothing of the sort. The universe isn't as big as we thought, and it isn't accelerating the way we thought it was. But to push back against 20 years of "everybody knows" would require going back over ALL of the data...

It's been two years in the making, but in light of Milne et al., astrophysicists at the University of Copenhagen have finished their analysis and the results are striking. What they've found is that there is almost no support for the idea of accelerating expansion, and even support for expansion itself has been weakened. Publishers rejoice! At least 20 years of textbooks will need to be revised now!

This finding has more significance than they state in the article in Nature. There are only a handful of possible conditions for the universe. Those conditions are:

1. accelerating expansion
2. steady expansion
3. decelerating / braking expansion
4. contraction
5. steady state

They have now effectively ruled out #1, and #2 is not looking so healthy anymore either. This leaves 3, 4 and 5 as the most likely candidates, but 4 and 5 are highly unlikely unless we find out Zwicky was somehow correct about light getting tired. Nevertheless, even if it's only down to 2 or 3, it's HUGELY important that we know which of these is the actual case. It affects lambda (the dark energy constant), but it also affects the gravitational constant. With these values now effectively up in the air, it brings into play many, many Grand Unified Theory candidates that had been tossed on the dust bin of history because they couldn't explain DE.
To my mind this also means the Kaluza-Klein theory, which unified electromagnetism and gravity back in the 1920s, is back in play. This is one of those breathtaking discoveries that very few people are talking about because no one is seriously considering all the repercussions... yet! Even this discovery by itself is just huge in its direct implications...

**Fundamentally, most of the universe is no longer missing!**

There is still likely to be dark energy of some kind, but it's not anywhere near as prevalent as thought, and every calculation that took it into account needs to be revised in light of this new science. This is great science; it's also very courageous to stand up and say "everything everyone knows about this is wrong". But this is solid, and they even made the source code and data available for download for free.

*Further reading:*

* Milne et al.
* Nielsen, J. T. et al., "Marginal evidence for cosmic acceleration from Type Ia supernovae"

What are your thoughts? Do you feel better knowing that most of the Universe is no longer missing?
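
For readers who want the standard-candle logic made concrete, it can be written as the usual distance modulus relation (a standard textbook formula, not taken from the post above): for an object with apparent magnitude $m$ and absolute magnitude $M$, the luminosity distance $d_L$ follows from

```latex
\mu = m - M = 5\log_{10}\!\left(\frac{d_L}{10\,\mathrm{pc}}\right)
```

Because every Type Ia supernova was assumed to peak at roughly the same absolute magnitude ($M \approx -19.3$), a measured $m$ gave $d_L$ directly, and pairing those distances with redshifts $z$ is what produced the acceleration claim. If $M$ actually varies with the progenitor's mass and metallicity, as described above, the inferred distances -- and the acceleration built on them -- shift accordingly.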

25 Oct 05:00

A Retired FBI Agent Addresses James Comey on the Hillary Clinton Investigation

Sir, what possessed you?

25 Oct 13:31

VIDEO: Brennan and Haidt Hayek Lectures

by Matt Zwolinski

While things are a little on the slow side at BHL, here are a couple of videos from Duke University’s Hayek Lecture series. Enjoy!

First, our own Jason Brennan on Markets Without Limits:


And, second, here's Jonathan Haidt on "Two Sacred and Incompatible Values in American Universities."

The post VIDEO: Brennan and Haidt Hayek Lectures appeared first on Bleeding Heart Libertarians.

25 Oct 15:00

Rigorous Intentional Inclusion

by Marisela Martinez-Cola

This guest post is by Marisela Martinez-Cola, a doctoral student in sociology at Emory University and instructor of sociology at Oglethorpe University.

“What would you say if I told you I own a gun?”

This is how I begin a lecture called "The Soap Operas of Sociology" for my Introduction to Sociology course. After a brief pause, I am met with a variety of responses. Some students say, "Good for you!" Others look on in disbelief.

Then, I ask the following questions: "What if I told you that I participate in sharpshooting competitions? Or that my brother-in-law is a gunsmith and we bond by going to a shooting range? Or that I was a victim of crime and simply felt safer having a gun? Would any of those reasons make you feel less shocked or better about why I own a gun?" Some students nod yes, while others indicate the opposite.

After some more silence, I ask, "Do you want to know if it's true?" All heads nod yes. "I'm not going to tell you," I explain, "because it doesn't matter. Whether or not I own a gun has absolutely no bearing on my ability to teach." At this point they groan in despair because they desperately want to know.

I teach at a small, private university in Atlanta, Georgia that draws a diverse group of students representing different races, classes, genders and (of course) political views. My goal that day was not simply to shock the students but to inform them of the research regarding “viewpoint diversity” and to encourage them to consider their own partiality.

There are a number of controversies in sociology. From p-values to politics, we wrestle with how to study, measure, and deliver findings regarding social phenomena. Our profession requires controversy; in fact, we welcome it. However, this open-minded spirit falls short in one very interesting area: politics.

For this lecture, I assign an article from The New York Times featuring Jonathan Haidt and his research on institutional bias against conservatives (Tierney 2011). I also present the economic, political science, and sociological research on the ratios of Democrats to Republicans among faculty at elite universities, non-elite universities, psychology departments, and sociology departments (Cardiff & Klein 2005; Gross & Simmons 2014); partisan discrimination in faculty hiring (Iyengar & Westwood 2015); and the awarding of hypothetical scholarships to Republican students (Shields & Dunn 2016). I also assign an interview with sociologist and evangelical Christian George Yancey: "Outside of academia I faced more problems as a black [person]," he explains. "But inside academia I face more problems as a Christian, and it is not even close" (Kristof 2016). Once again, I summarize research regarding whether or not faculty would hire an applicant who identified as evangelical (Yancey 2011).

Haidt, in his interview, describes a graduate student who uses the “coming out of the closet” metaphor to describe what it feels like to identify as a conservative or Republican in academia. I should note that I vehemently disagree with using this analogy, particularly since identifying one’s politics would not result in the personal loss of friends, family, funding, and even life itself. Nonetheless, I believe the graduate student was referring to the fear of rejection. The story does reveal an interesting aspect of identity politics that sociologists should, at the very least, be interested in studying.

Exploring these issues, for example, would be similar to the shift in race scholarship to not only examine underrepresented populations but also the construction of whiteness. If this is something worthy of study, then we should employ the same rigorous inquiry that we give to the topics that are near and dear to “liberal” sociology.

In my former law career, I learned that every lawyer should understand their own legal argument as well as their opponent’s. This is not to say that conservative or evangelical scholars are my opponents. Quite the contrary: I welcome their points of view. It is the same reason I watch both CNN and Fox News. It is important for me to understand both sides of an issue and the evidence and theory used to support each argument.

This, I believe, should also apply to sociological investigations. As a race scholar, I understand that this political divide has always existed. There is W. E. B. DuBois versus Booker T. Washington, Claude Steele versus Shelby Steele, and William Julius Wilson versus, well, everyone else. These political debates help us understand the complexity of race relations in the United States. Corey D. Fields’ (2016) recently published Black Elephants in the Room: The Unexpected Politics of African American Republicans captures this complexity. I am eager to include this research the next time I teach Race & Ethnic Relations because Black Republicans are a group that evokes the most eye rolls, sighs and groans from my students. It is my job to challenge them to explore, not dismiss, groups they do not understand or to whom they have yet to be exposed.

Including political diversity in my syllabus accomplishes three very important objectives.

  • First, it demonstrates my personal and academic commitment to rigorous, intentional inclusivity.
  • Second, it insulates me from allegations of excluding or silencing conservative and evangelical points of view in my classroom.
  • Most importantly, it allows me to model what it means to be a colleague who can, in a civil manner, agree to disagree.

As a result, I encourage my conservative and evangelical students to speak their minds and discuss their understanding of sociological phenomena. In doing so, I explain, they must engage scholarship and one another with civility, evidence and theory. It is what I expect of all my students.  I also provide a link to Heterodox Academy to connect them with the evidence and theory used by like-minded scholars.

So why did I choose guns? Because there are so many preconceived ideas about who and what American gun owners are. To illustrate my point, I Googled the phrase “gun owners” and clicked on images. The images included highly sexualized advertisements of women with guns, political cartoons of men armed to the teeth, and even a man holding a gun in one hand and a baby in the other. I explained that, whether or not we want to admit it, there are a number of labels attached to owning a gun. Such labels can cloud our judgment as scholars and impede our ability to study controversial topics or manage uncomfortable findings in an unbiased manner.

Finally, I provide the students an example of research that I believe misses the mark. In Reflexivity and Voice, Charmaz and Mitchell, Jr. (1997) recount Mitchell’s ethnographic study of survivalists in a small Illinois town. I remember being struck by two excerpts on survivalist training. In the first, the researcher describes his interaction with a survivalist who covers himself in mud:

“The rest was transformed into a kind of filth-covered, primitive man-thing. I stared at it. It stared back. The thing fastidiously wiped its hands on a patch of grass, rose from its haunches, stepped around the fire and stood in front of me. It extended one hand and, almost to my surprise, spoke. “Hello,” it said, “I’m Henry [p. 198, emphasis mine].”

In a second account, he describes survivalist attire as “ludicrous costumes” and refers to the subjects as “play-acting weekend warriors.” I wondered if his subjects knew he was dehumanizing them, making fun of them, and reducing to caricature an activity they obviously enjoy and take seriously. I found it extraordinarily disrespectful, particularly since this group welcomed him into their world. Charmaz and Mitchell, Jr. explain that this account exemplifies a voice that is “neither neutral nor muted” (p. 200). Because of this bias, I considered what valuable information we missed, information that would have helped us understand this particular group of people.

I explain to my students that it is easy to make fun of survivalists or others with more extreme lifestyles. It is harder to try to understand them and report findings in an honest, nonjudgmental manner. An excellent example of respectfully but critically studying aberrant groups is Matthew Hughey’s (2012) comparative study of white nationalist and antiracist organizations in White Bound: Nationalists, Antiracists, and the Shared Meaning of Race. As sociologists, I explain, our scholarship is stronger when we invite and answer contradictory, unpopular and opposing views or findings. Furthermore, reflexivity requires us to explore our response if a colleague were to disclose, “I’m a Republican” or “I serve my Lord and Savior Jesus Christ” or, in this case, alleged gun ownership.

Excerpt from My Sociology 101 Syllabus on Controversial Issues

It is my goal to create an environment where we can engage in the free exchange of ideas without fear of judgment, harassment, and discrimination. In this class we will be discussing a variety of controversial issues. A good sociologist, however, knows how to discuss these issues with mutual respect, civility, and understanding.

We will spend the first days of class identifying what we need to create an environment where everyone feels comfortable engaging in mutually beneficial and enlightening discussions. You do not have to agree with everything you hear in class, but you must respect a person's right to their opinions and thoughts. I simply ask that when you express these opinions, you do so with respect. Most issues occur when people share personal stories without prefacing their remarks with a desire to learn. Beginning a remark with, “No offense, but…” is usually an indicator that your remark will be completely offensive. If you are unsure how to ask a question and/or discuss an issue, I welcome you to meet with me. I promise you will leave feeling understood, respected, and hopefully having learned a different point of view.

I strive to create an open and welcoming classroom. If I ever miss the mark, please don’t hesitate to come and talk to me. We are all learning together.

Opinions expressed are those of the author(s). Publication does not imply endorsement by Heterodox Academy or any of its members.

Works Cited

  • Cardiff, Christopher F. and Daniel B. Klein. 2005. “Faculty partisan affiliations in all disciplines: A voter-registration study.” Critical Review 17(3-4): 237-255.
  • Charmaz, Kathy and Richard G. Mitchell, Jr. 1997. “The Myth of Silent Authorship: Self, Substance, and Style in Ethnographic Writing,” in Reflexivity and Voice edited by Rosanna Hertz. Thousand Oaks, CA: Sage Publications Inc.
  • Fields, Corey D. 2016. Black Elephants in the Room: The Unexpected Politics of African American Republicans. Oakland, CA: University of California Press.
  • Gross, Neil and Solon Simmons. 2014. Professors and Their Politics. Baltimore, MD: Johns Hopkins University Press.
  • Hughey, Matthew. 2012. White Bound: Nationalist, Antiracists and the Shared Meaning of Race. Stanford, CA: Stanford University Press.
  • Iyengar, Shanto and Sean Westwood. 2015. “Fear and Loathing across Party Lines: New Evidence on Group Polarization.” American Journal of Political Science 59(3): 690-707.
  • Kristof, Nicholas. May 7, 2016. “A Confession of Liberal Intolerance.” New York Times. Retrieved October 16, 2016 ( ).
  • Shields, Jon A. and Joshua M. Dunn, Sr. 2016. Passing on the Right: Conservative Professors in the Progressive University. Oxford: Oxford University Press.
  • Tierney, John. February 8, 2011. “Social Scientist Sees Bias Within.” New York Times. Retrieved October 19, 2016 ( ).
  • Yancey, George. 2011. Compromising Scholarship: Religious and Political Bias in American Higher Education. Waco, TX: Baylor University Press.
25 Oct 18:00

AT&T's new streaming TV service will give you 100+ channels for $35 a month (T)

by Nathan McAlone

AT&T President and CEO Randall Stephenson

AT&T just dropped a bombshell by announcing that its streaming TV package, DirecTV Now, will include more than 100 channels for only $35 a month.

That $35 includes unlimited mobile data for your TV viewing, AT&T CEO Randall Stephenson said Tuesday at The Wall Street Journal's digital conference.

The service will debut in November.

DirecTV Now will be a package of live TV delivered over the internet wherever you are — no cable box or satellite dish necessary.

It will target the 20 million people in the US who don't have pay TV, but the company plans for it to be the primary TV platform by 2020, according to Bloomberg.

DirecTV Now's $35 price point undercuts the early industry norms for live-streaming TV. The market leader Sling TV charges $20 for "25+" channels, and its highest package has about 50 channels for $40. Sony's PlayStation Vue charges $54.99 for about 100 channels, and its lowest package gives you "60+" channels for $39.99 a month. Other competitors including Hulu and YouTube are reportedly readying their own packages for streaming live TV but have yet to name a price.

DirecTV Now seems to be blowing them all out of the water on price, though the full catalog of channels has yet to be announced. It will have channels from Time Warner, NBCUniversal, Fox, Disney, and others. AT&T can afford the low price point because it didn't have to create and service legacy equipment like satellite dishes, Stephenson said.

As is the norm for "over-the-top" services like Netflix or Sling TV, DirecTV Now also won't lock you into an annual contract.

Pay TV as an app

DirecTV Now won't break the mold of pay TV; it will simply make the delivery more fluid and improve on price.

"It's pay TV as an app," AT&T's senior vice president of strategy and business development, Tony Goncalves, told Business Insider in a recent interview.

AT&T sees itself as an "aggregator of aggregators," and its strength will be in the breadth of content it provides (more than 100 channels), as well as in a pain-free technical experience. As a user of Sling TV, I have had many tech issues, and that element should be factored in prominently.

Stephenson also said DirecTV Now would eventually be bolstered by AT&T's 5G network. He presented 5G as an alternative to broadband moving forward.

Time Warner

This announcement comes on the heels of AT&T's proposed $85 billion purchase of Time Warner. The deal, if it goes through, would link AT&T's "pipes" — wireless, broadband, and satellite — to Time Warner's media properties, which include HBO, CNN, and Warner Brothers.

Stephenson said Time Warner channels would be available on DirecTV Now.

For a full overview of AT&T's DirecTV Now strategy, see our interview with Tony Goncalves.


25 Oct 18:25

Overstock's Blockchain Stock Will Begin Trading in December

by Pete Rizzo
Online retail giant Overstock will soon begin trading blockchain-based shares of its stock.


25 Oct 14:27

Washington Post Explains Their Swing To Fascism

by tonyheller

For decades, the Washington Post claimed to support transparency. Then transparency started working against their political causes, and they shifted into full Orwellian propaganda mode.


The result of Wikileaks’s new transparency? Less transparency. – The Washington Post

In the Soviet Union, all bad news about the corrupt regime was blamed on US intelligence meddling. Roles have reversed under our first communist president.


24 Oct 21:30

On SteemBots and Voting Errors

# On SteemBots and Voting Errors

## Introduction

In [Maslow's Hierarchy of Needs and The SteemBot Revolution](), I began to make an affirmative argument that steembots, including voting bots, contribute to the overall well-being of the steemit ecosystem. This article continues that argument by considering voting errors and demonstrating that humans and bots complement each other's capabilities in a way that makes the steemit platform better overall.

## Two Types of Voting Errors

![vote]()
*Image source: licensed under CC0, Public Domain*

In statistical testing, there are two types of errors: Type-I and Type-II. A Type-I error, or false positive, is incorrectly rejecting a true null hypothesis: finding an effect that doesn't exist. A Type-II error, or false negative, occurs when the null hypothesis is false but not rejected: an effect exists but isn't found.

Likewise, in steemit, two types of voting errors are possible. If we take steemit's null voting proposition to be that an article is not valuable, then voting for an article that I don't actually believe is valuable is analogous to a false positive, or Type-I error. In this case, by "finding" value that doesn't exist, an author may wind up receiving an undeserved reward, and I may even receive an improper curation reward. On the other hand, if an article is posted that I find valuable, and I have sufficient voting strength to vote, but I neglect to vote for it, that is analogous to a false-negative statistical error. This type of error is demoralizing to authors of valuable content.

This can be summarized in a table:

| Voting Decision | Article is *NOT* subjectively valuable to me | Article is subjectively valuable to me |
| --- | --- | --- |
| **Vote** | Type-I Voting Error | Voted Correctly |
| **No Vote** | Voted Correctly | Type-II Voting Error |

*\*Note: I don't mention flagging, because I consider it mostly harmful to the platform, and therefore don't use it. But it should be noted that flagging would complicate this analysis.*

The Type-I form of error is often discussed, because it is highly visible and vulnerable to abuse. This seems to be what leads many people to dislike bot voting. Type-II voting error, however, is an important and infrequently discussed (in my feed, anyway) phenomenon. Any time my voting power is at or near 100%, if a valuable article exists and I am not voting, then I am committing a Type-II voting error. In this case, I am foregoing curation rewards that I might be earning, and I am missing an opportunity to help others earn author rewards. These missed votes may contribute to author-retention problems, limit steemit's growth, and otherwise prevent the steemit community from maximizing its potential. All of these things reduce the value of my own SteemPower (SP) holdings. Additionally, and this is underappreciated, by adding authors of valuable content to the reward pool and thereby diluting payouts, a reduction of Type-II errors will also tend to limit the harm done to steemit by Type-I errors. It is my (unsupported) opinion that the vast majority of steemit voting errors are presently of the Type-II variety.

## Humans and Steembots Have Complementary Strengths

As a person with a family, a job, other interests, and needs for food and sleep, I am constantly committing Type-II voting errors. It's unavoidable, because it is simply not possible for me to manually read and vote for every quality article posted on steemit, even at steemit's relatively small beta size. Some quality articles will always get past me. This problem will only be exacerbated as steemit grows. I can reduce my Type-II errors by reading less carefully, not viewing videos, or even voting from the thumbnail view, but this reduction comes at the cost of increasing Type-I voting errors.

On the other hand, a voting bot doesn't need to eat or sleep, and it can vote whenever it has voting power, 24x7x365. Even a bot won't catch every quality article, but it will do much better than I can. At the moment, the drawbacks to bot voting are that the barriers to entry are significant and the widely available heuristics for value are low-quality, so the few users who manage to launch bots will commit substantial Type-I voting errors, at least initially. It should be noted that the @robotev bot by @cryptos and the @steemvoter bot by @marcgodard both begin to lower the barriers to entry for bot voting. I'm sure there are already other tools out there, and even more will emerge.

This table summarizes the relative strengths and weaknesses of humans and bots:

| Voter | Type-I (False Positive) | Type-II (False Negative) |
| --- | --- | --- |
| Human | Very good at avoiding | Cannot avoid |
| SteemBot | Difficult to avoid (but improving) | (Relatively) Good at avoiding |

## Conclusion

The intuitive solution to voting bots that vote on low-quality articles and deliver unsupported rewards through Type-I voting errors is to reduce or eliminate the use of bots. However, this would increase the overall number of Type-II voting errors, which would also be harmful to the steemit platform. The better solution is to increase the quality and number of bots in the ecosystem, thereby reducing Type-II voting errors and mitigating the harm from Type-I voting errors. Although counterintuitive, this measure should limit the damage done by abusive bots, and it should also generate more widespread author satisfaction.

Therefore, as barriers to entry come down and quality goes up, all steemizens would be well served to make use of both bot voting and manual voting, in order to minimize both Type-I and Type-II voting errors. As I noted [here](), I fully expect a vibrant market to emerge for non-technical users to harness the increasing benefits of voting by steembot. When that happens, all of steemit will benefit.

------

@remlaps is an Information Technology professional with three decades of business experience working with telecommunications and computing technologies. He has a bachelor's degree in mathematics, a master's degree in computer science, and is currently completing a doctoral degree in information technology.
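The Type-I/Type-II taxonomy maps directly onto a tiny classifier. A minimal sketch in Python (the function name and the (valuable, voted) data below are purely illustrative, not part of any Steem API):

```python
# Classify a single voting decision against the null proposition
# "the article is not valuable", following the taxonomy described above.
def classify_vote(valuable: bool, voted: bool) -> str:
    if voted and not valuable:
        return "Type-I error"   # false positive: undeserved reward
    if not voted and valuable:
        return "Type-II error"  # false negative: missed vote on good content
    return "correct"

# Tally hypothetical (valuable, voted) decisions for one voter.
decisions = [(True, True), (True, False), (False, True), (False, False)]
tally = {}
for valuable, voted in decisions:
    label = classify_vote(valuable, voted)
    tally[label] = tally.get(label, 0) + 1

print(tally)
```

In these terms, a busy human's tally is dominated by Type-II entries, while today's crude bots mostly contribute Type-I entries, which is the trade-off the tables summarize.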
24 Oct 19:21

Astrophysicists Discovered Periodic Spectrum Modulations of 234 Solar-type Stars and Claim that They are Produced by Extraterrestrial Civilizations

Interesting news has come from Canadian astrophysicists who claim to have experimentally discovered optical spectral modulations of stars, with the most likely explanation, in their view, being that these modulations are caused by the activities of extraterrestrial civilizations.

The article by Canadian astrophysicists E. F. Borra and E. Trottier (Département de Physique, Université Laval) was published in the prestigious scientific journal “Publications of the Astronomical Society of the Pacific,” an authoritative venue in fundamental astronomy and astrophysics where each article is heavily reviewed and verified before publication. This means that even if Borra and his co-author did not find any aliens, they certainly found interesting peculiarities in some stars from the spectral-class range F2-K1, similar to our Sun.

The scientists analyzed roughly 2.5 million stellar spectra from the Sloan Digital Sky Survey using the Fourier transform. The spectra of 234 stars demonstrated periodic modulations, with the period of modulation being the same for all 234 stars. At the same time, all of the unusually behaving stars belong to spectral classes from F2 to K1. This means the stars are very similar to our Sun, whose spectral class is G2.
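The kind of search described (a Fourier transform picking a fixed-period modulation out of a stellar spectrum) can be illustrated on synthetic data. Everything below (the signal shape, amplitudes, and frequency) is invented for illustration and is not taken from the Borra & Trottier analysis:

```python
import numpy as np

# Synthetic "spectrum": a smooth continuum plus a small periodic
# modulation at a known frequency, plus noise. All values are illustrative.
rng = np.random.default_rng(0)
n = 4096
x = np.arange(n)                       # pixel index along the spectrum
continuum = 1.0 + 0.1 * np.sin(2 * np.pi * x / n)
mod_freq = 50 / n                      # injected modulation, cycles per pixel
spectrum = continuum * (1 + 0.02 * np.sin(2 * np.pi * mod_freq * x))
spectrum += 0.005 * rng.standard_normal(n)

# Fourier transform; look for a sharp peak away from the slowly
# varying continuum, which dominates the lowest frequencies.
power = np.abs(np.fft.rfft(spectrum - spectrum.mean())) ** 2
freqs = np.fft.rfftfreq(n)
mask = freqs > 10 / n                  # ignore the continuum's low frequencies
peak = freqs[mask][np.argmax(power[mask])]
print(f"detected modulation at {peak:.5f} cycles/pixel (injected {mod_freq:.5f})")
```

The real analysis is of course far more careful about instrumental effects, but the principle is the same: a strictly periodic modulation concentrates its power into one sharp Fourier peak that stands far above the noise floor.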

The screenshot is taken from Search for Extraterrestrial Intelligence project SETI@home

The authors of the article claim that the discovered anomalies cannot be caused by hardware errors or mathematical peculiarities of the signal processing. Borra and his colleagues believe that the most fitting hypothesis is that we are observing rapid light pulses, separated by intervals on the order of 10^-12 seconds, generated by extraterrestrial civilizations.

Perhaps we are in fact observing how developed civilizations communicate with each other!

People with expertise in star spectroscopy or star seismology, I would definitely love to hear your opinions on the matter!

24 Oct 10:02

10/24/16 PHD comic: 'A new method for reviews'

Piled Higher & Deeper by Jorge Cham
Click on the title below to read the comic
title: "A new method for reviews" - originally published 10/24/2016


24 Oct 14:04

Blockchain gets its first test with international trade

by Jon Fingas

h/t Roumen.ganeff

Some financial gurus are convinced that blockchain (the underlying tech behind bitcoin) is the future of business, and they might already have some proof. The Commonwealth Bank of Australia and Wells Fargo have conducted the first international, inte...
24 Oct 11:00

Statistical Models CANNOT Show Cause, But EVERYBODY Thinks They Can. Hence the Replication CRISIS

by Briggs
From the paper.


Please pass this on to ANY researcher who uses statistics. Pretty please. With sugar on top. Like I say below, it’s far far far far far past time to cease using statistics to “prove” cause. Statistical methods are killing science. Notice the CAPITALIZED words in the title to show how SERIOUS I am.

Statistical models cannot discern or judge cause, but everybody who uses statistics thinks models can. I prove—where by prove I mean prove—this inability in my new book Uncertainty: The Soul of Modeling, Probability & Statistics, but here I hope to demonstrate it to you (or intrigue you) in short form using an example from the paper “Emotional Judges and Unlucky Juveniles” by Ozkan Eren and Naci Mocan at the National Bureau of Economic Research.

Now everybody—rightfully—giggles when shown plots of spurious correlations, like those at Tyler Vigen’s site. A favorite is per capita cheese consumption and the number of people who died by becoming tangled in their bedsheets. Two entangled lines on a deadly increase! Perhaps even more worrying is the positive correlation between the number of letters in the winning words at the Scripps National Spelling Bee and the number of people killed by venomous spiders. V-e-n-o-m-o-u-s: 8.

All of Vigen’s silly graphs are “statistically significant”; i.e. they evince wee p-values. Therefore, if statistical models show cause, or point to “links”, then all of his graphs must—as in must—warn of real phenomena. Read that again. And again.

All of Vigen’s correlations would give large Bayes Factors. Therefore, if Bayesian statistical methods show cause, or point to “links”, then all of these graphs must—I must insist on must—prove real links or actual cause.

All of Vigen’s data would even make high-probability predictions, using the kind of predictive statistical methods I recommend, or using any “machine learning” algorithm. Therefore, if predictive or “machine learning” methods show cause, or point to “links”, then all of his graphs must—pay attention to must—prove cause or show real links.

I insist upon: must. If any kind of probability model shows cause or highlights links, then any “significant” finding must prove cause or links. Any data fed into a model which shows significance (or a large Bayes factor, or a high-probability prediction) must be identifying real causes.

Since that conclusion is true given the premises, and since the conclusion is absurd, there must be something wrong with the premises. And what is wrong is the assumption probability models can identify cause.

There is no way to know, using only sets of data and a probability model, if any cause is present. If you disagree, then you must insist that every one of Vigen’s examples is a true cause, previously unknown to science.
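The cheese-and-bedsheets effect is trivial to reproduce: two series that share nothing but an upward trend correlate almost perfectly and would sail through any significance test. A minimal sketch with invented numbers (no real consumption or mortality data):

```python
import numpy as np

# Two unrelated quantities that both happen to drift upward over 20 "years".
# All values are invented for illustration.
t = np.arange(20.0)
cheese = 30.0 + 0.4 * t + 0.3 * np.sin(t)                 # per-capita cheese
bedsheet_deaths = 300.0 + 8.0 * t + 3.0 * np.cos(3 * t)   # annual deaths

# The shared trend alone produces a near-perfect correlation...
r = np.corrcoef(cheese, bedsheet_deaths)[0, 1]
print(f"r = {r:.3f}")

# ...which largely disappears once the trend is removed (first differences).
r_diff = np.corrcoef(np.diff(cheese), np.diff(bedsheet_deaths))[0, 1]
print(f"detrended r = {r_diff:.3f}")
```

The model is perfectly happy either way; only knowledge of the nature of cheese-eating and of bedsheets, which lives outside the statistics, tells us the first number is meaningless.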

Here’s an even better example. Two pieces of data, Q and W, are given, and Q modeled on W, or vice versa, gives statistical joy, i.e. these two mystery data give wee p-values, large Bayes factors, high-probability predictions. Every statistician, even without knowing what Q and W are, must say Q causes W, or vice versa, or Q is “linked” to W, or vice versa. Do you see? Do you see? If not, hang on.

How do we know Vigen’s examples are absurd? They pass every statistical test that says they are not. Just as the flood, the tsunami, the coronal mass ejection of papers coming out of academia pass every statistical test. There is no difference, in statistical handling, between Vigen’s examples and official certified scientific research. What gives?

Nothing. Nothing gives. It is nothing more than I have been saying: probability models cannot identify cause.

We know Virgen’s examples are absurd because knowledge of cause isn’t statistical. Knowledge of cause, and knowledge of lack of cause, is outside statistics. Knowledge of cause (and its lack) comes from identifying the nature or essence and powers of the elements under consideration. What nature and so on are, I don’t here have space to explain. But you come equipped with a vague notion, which is good enough here. The book Uncertainty goes into this at length.

You know the last paragraph is true because if presented with the statistical “significance” of Q and W no statistician or researcher would say there was cause until they knew what Q and W were.

The ability to tell a story about observed correlations is not enough to prove cause. We could easily invent a story about per capita cheese consumption and bedsheet deaths. We know this correlation isn’t a cause because we know the nature of cheese consumption, and we have some idea of the powers needed to strangle somebody with a sheet, and that the twain never meet. Much more than a story is needed.

Also, if we know Q causes W, or vice versa, or that Q is in W’s causal path, or vice versa, then it doesn’t matter what any statistical test says: Q still causes W, etc.

We’re finally at the paper. From the abstract:

Employing the universe of juvenile court decisions in a U.S. state between 1996 and 2012, we analyze the effects of emotional shocks associated with unexpected outcomes of football games played by a prominent college team in the state…We find that unexpected losses increase disposition (sentence) lengths assigned by judges during the week following the game. Unexpected wins, or losses that were expected to be close contests ex-ante, have no impact. The effects of these emotional shocks are asymmetrically borne by black defendants.

You read it right. Somehow all judges, whether they watch or care about college football games and point-spreads, let the outcomes of the football games, which they might not have watched or cared about, influence their sentencing, with women and children suffering most. Wait. No. Blacks suffered most. Never mind.

Wee p-values “confirmed” the causative effect or association, the authors claimed.

But it’s asinine. The researchers fed a bunch of weird data into an algorithm, got out wee p-values, and then told a long (57 pages!), complicated story, which convinced them (but not us) that they have found a cause.

What happened here happens everywhere and everywhen. It’s far far far far far past the time to dump classical statistics into the scrap heap of bad philosophy.

I beg the pardon of the alert reader who pointed me to this article. I forgot who sent it to me.

24 Oct 12:50

China just showcased the world's most human-like robots

by Leon Siciliano

The world's most human-like robots have been unveiled in China and the resemblance is uncanny.

They were developed by China's University of Science and Technology and were showcased at the World Robot Conference in Beijing.

One of the robots, called Jiajia, can talk with you, recognise faces, identify the gender and age of people, and detect your facial expressions.

Produced by Leon Siciliano


22 Oct 13:30

Here’s what a computer is thinking when it plays chess against you

by Chris Snyder

A co-lead at Google's Big Picture data visualization group has created an online version of chess called the Thinking Machine 6, which lets you play against a computer and visualize all of its possible moves. While the computer may not be the most advanced player, the program provides an inside look at how your artificial opponent's mind works.


21 Oct 22:04

The possible ninth planet could explain a tilt in the Sun

by John Timmer

(credit: Caltech/R. Hurt (IPAC))

Ideas about a possible ninth planet have been kicking around since shortly after we discovered the eighth in 1846. But so far, all that we've come up with is Pluto and a handful of other objects orbiting out in the Kuiper Belt. And these dwarf planets simply don't have the mass to have a significant gravitational influence on our Solar System.

But our inability to find anything big beyond the known planets may just have been because we weren't thinking radically enough. One of the people responsible for the discovery of a number of Kuiper Belt Objects noticed an odd alignment in their orbits. When running models of how that oddity could be produced, he and his team found that a large planet with an extreme orbit would work.

Calling it Planet 9, they suggested it could be over 10 times Earth's mass and so far out it takes 20,000 years to complete one orbit. Planet 9, they speculated, has a lopsided orbit that's tilted relative to the other planets and much closer to the Sun on one side.


21 Oct 11:55

That Gigantic New York Times Piece about Free Play

by lskenazy
In Silicon Valley, a dad named Mike Lanza wanted to create for his three sons the same kind of Free-Range childhood he’d enjoyed as a kid back in Pittsburgh in the ’70s: Time with buddies, having adventures, riding bikes and goofing around.
Since this is the 21st century and childhood is so much more supervised and organized, he decided to turn his home into a bastion of freedom and community. So he did something I just love: in a neighborhood where the average home costs $2 million, he placed a picnic table on his front lawn. He and his family ate there often, inspiring exactly the kind of give-and-take you’d expect: people meeting, talking, getting to know each other.
This gargantuan New York Times article by Melanie Thernstrom describes the “Playborhood” that Lanza developed — a sort of playground that attracts a lot of local kids, who play in what sounds like a rough approximation of the way they’d play if they were in the woods:
He consciously transformed his family’s house into a kid hangout, spreading the word that local children were welcome to play in the yard anytime, even when the family wasn’t home. Discontented with the expensive, highly structured summer camps typical of the area, Mike started one of his own: Camp Yale, named after his street, where the kids make their own games and get to roam the neighborhood.
“Think about your own 10 best memories of childhood, and chances are most of them involve free play outdoors,” Mike is fond of saying. “How many of them took place with a grown-up around?”
I ask that very question at my own lectures sometimes, and everyone laughs. Almost all of today’s parents remember their childhood freedom intensely, longingly, and when pressed, they usually say they wished they could give it to their own kids but don’t know how.
Lanza’s insight was to realize that if he made a place where kids could be pretty sure they’d usually find other kids, children would naturally come outside and start playing. He and I both agree with psychologist  Peter Gray that when kids are organizing their own games — making the teams, deciding the rules, inventing some new challenge — they are learning far different life skills than they learn in organized activities. Skills like communication, creativity, and compromise, often along with some basic risk-taking.  
Gray visited Lanza’s home earlier this year and even trailed the oldest Lanza boy, Marco, age 11. On his Psychology Today blog, Freedom to Learn, Gray writes:
Mike lent me a bicycle so I could follow Marco and his friend on this trip, which I did, from a distance, with the boys’ kind permission.  I saw how gracefully they performed scooter tricks in driveways and on steps as they made their way to the park; I watched as they stopped at a bicycle shop on the way, where Marco, on his own, got an inner tube that he needed for some project he was working on at home; and I also saw how careful the boys were when they crossed the busy multilane street they had to cross to get to the park.  At the park, Marco and his friend seemed to be the only kids their age or younger who were not attended by an adult. I felt a vicarious thrill as I watched them make their scooters dive and leap at the park.
Mike believes that Marco would not be the physically and socially competent boy he now is if it were not for his (Mike’s) efforts in creating the playborhood.  In a recent essay (here) he presents evidence that social skills did not come easily or naturally to Marco.  It was only through regular, daily experience at play with others that he learned to be socially competent and confident. 
Free play is not a frivolous extracurricular that can be dropped from a child’s busy schedule. It may actually be the key to resilience and ingenuity — not to mention joy. So I am extremely glad Lanza came up with a way (and a book) to give it back to this generation. Other people in other towns are encouraged to try something similar. So I am on board with him — except for the fact that he also lets kids climb on the roof of his home. A male friend I discussed this with told me, “Roofs call out to boys the way mountains do.” 
A female friend and I agreed we’d never want our kids on the roof. Period.
But let’s not lose track of the bigger picture, which is that the Playborhood inspired ten trillion words in The Times because the once heartwarming sight of kids playing on their own has become so unusual.
As a society, we have pretty much declared ALL unsupervised time unsafe. Parents get harassed and even arrested for letting their kids walk to school, or play outside, thanks to a mindset that equates freedom with danger. That means many kids don’t get ANY unstructured, unsupervised time. Without it, they are losing out on the very confidence and competence that Marco seems to enjoy.
The Free-Range idea is not to court danger. We are not negligent. We are not daredevils. We are allowed to set whatever limits make sense to us. But we are also mindful of the fact that a childhood drained of absolutely all risk, even the risk of walking to the bus stop, is a new and dangerous (!) idea. We believe we can actually make our kids too safe to succeed.
Free time was the great gift our culture used to give children. The Free-Range goal is to give kids back at least a fraction of the healthy, happy, horizon-expanding, creativity-boosting, entrepreneur-growing, resilience-building freedom that made America so successful.
And its children so happy. – L. 


Is there a way to get kids back outside, making their own fun, unsupervised, unstructured, untrophied?


21 Oct 20:38

Blame the Internet of Things for today's web blackout

by Jessica Conditt

h/t Roumen.ganeff

Today's nationwide internet outage was enabled by a Mirai botnet that hacked into connected home devices, according to security intelligence company Flashpoint. The distributed denial of service attack targeted Dyn, a large domain name server...
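The attack described above works because clients must resolve a hostname through DNS before they can connect, so knocking out a major DNS provider makes otherwise-healthy sites unreachable. A minimal sketch of that dependency (the `can_resolve` helper and the hostnames are illustrative, not part of the report):

```python
# Why a DNS outage "takes down" sites whose servers are still running:
# before a browser can open a connection, the hostname must be resolved
# to an IP address. If the authoritative DNS provider (like Dyn) is
# unreachable, resolution fails and the site appears to be offline.
import socket

def can_resolve(hostname):
    """Return True if the hostname resolves to an IP address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:  # raised on DNS resolution failure
        return False

print(can_resolve("example.com"))
```

The `.invalid` top-level domain is reserved and never resolves, which makes it handy for testing the failure path.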
21 Oct 18:36

Maslow's Hierarchy of Needs and The SteemBot Revolution

# Maslow's Hierarchy of Needs and The SteemBot Revolution

## Introduction

In a comment to @joseph's post, [WHY ARE SOME OF STEEMIT BEST WRITERS LEAVING?...](, @stellabelle wrote:

> Also, I never used a bot to vote and I only vote for things I read. Doing otherwise is a move lacking in integrity and something making this site worthless.

Not to single out @stellabelle, but this seems to be a common perspective. In this post, I will argue against it by considering the phenomenon of steemit's voting bots from the perspective of Maslow's Extended Hierarchy of Needs.

Actually, as I have noted [before]( we can dismantle the argument against voting bots in a single word: "Google." No one at Google actually reads the billions of pages that Google indexes, yet Google manages to assign a subjective ranking to the pages based upon the links (votes) to web pages that are scattered around the web. Would anyone seriously argue that Google isn't providing something of value, or that their PageRank algorithm demonstrates a lack of integrity? Does anyone want to argue that human readers must replace Google's PageRank algorithm?

But that's an argument by counter-example. In this article, I attempt to begin an affirmative argument that demonstrates why bots don't pose the threat to steemit that so many steemians seem to fear.

## Brief Summary of Maslow's Hierarchy of Needs

![Maslow's Hierarchy](
*Image By FireflySixtySeven [CC BY-SA 4.0 (], via Wikimedia Commons*

Maslow's Hierarchy of Needs is described at []( It is a theory of human motivation that was first described in 1943. In Maslow's original version, he described five levels of need. The basic needs were physiological needs and the need for safety. Psychological needs included a need for love and belonging, and a need for esteem. In this model, the highest-order need was described as a self-fulfillment need, and named self-actualization. This model was later extended to include eight levels of needs in the same three categories. These are:

* Basic needs
  * Physiological
  * Safety
* Psychological needs
  * Love and belonging
  * Esteem
  * Cognitive needs
  * Aesthetic needs
* Self-fulfillment
  * Self-Actualization
  * Transcendence

For more information, you may enjoy this video.

By now, I guess you see where I'm going, and you're liable to ask, "What makes you think you can apply Maslow's Hierarchy to bots?" Well, first of all, bots are all owned by humans, and even if it doesn't apply to the bots themselves, it applies to their owners. Second, in the steemit ecosystem, bots are functioning as semi-autonomous agents. Some theory of motivation is needed to explain their behavior, so Maslow seems like a good place to start.

## Applying Maslow to SteemBots

So, let's look at a bot's needs. Remember that steemit is new, steembots are new, and at the present time most of them are probably existing in the lower tiers of the hierarchy, but as time moves on, their botkeepers are going to drive them further up the hierarchy. It is important to remember that each level of fulfillment need will filter out a certain amount of destructive behavior. Only a percentage of bots will meet all of their basic needs and move up to "psychological needs" (I use Maslow's terms, although some of them should probably be renamed for the bot ecosystem). Each step up the hierarchy will be associated with a gain in steem power, so the higher the bot goes in the hierarchy, the more influential it will be. After fulfilling psychological needs, an even smaller percentage of bots will achieve self-fulfillment.

### Basic Needs

#### Physiological

A bot's physical needs include a need for power, a need for compute resources, a need for Internet, a need for program code, and a need for the steem blockchain. And here we already run into the first reason why a bot isn't a long-term threat to the steemit ecosystem. If the bot destroys the steemit ecosystem, it destroys itself. A properly motivated bot must understand that its own survival depends on the survival and growth of the STEEM blockchain.

#### Safety

A bot's safety needs include the need to have the utility bills paid, and the need to avoid being disabled by its botkeeper. Which gives us another reason why bots won't threaten steemit in the long term. Even if the bot doesn't care about steem's survival, the botkeeper does. If a bot becomes pathological, the botkeeper will disable it. So, the bots that survive will be the ones that avoid excessively pathological behavior.

### Psychological Needs

#### Love and Belonging

Let's leave love out of it for the bots, but a bot does have a belonging need. It needs to be able to interact with posts on the blockchain by reading, voting, and commenting. Some bots may also have needs to follow people or to be followed. Once we get to this level of the hierarchy, almost everything a bot needs is threatened by pathological behavior, so the bots will become very well behaved and mannerly.

#### Esteem

In humans, esteem comes in two forms: self-esteem, and reputation -- the level of esteem among others. I'm not sure whether self-esteem is relevant to bots, but reputation certainly is. Steemit even has a score for reputation. Posting and commenting bots will have a need to generate positive interactions, and to avoid flagging. Voting bots will have a need to be perceived as "good voters" so they can develop a following of other voters.

#### Cognitive Needs

Bots will need to be increasingly able to predict the posts that people will find valuable. To accomplish this, they will need to find correlations between value and things like links between posts, up-votes, comments, author histories, and network graphs, and they'll even need to learn to derive more and more information from language. Voting bots that simply look at a list of voters will eventually be totally overwhelmed by voting bots of increasing cognitive ability.

#### Aesthetic Needs

At first, I was going to punt on this one. What aesthetic needs could possibly apply to a bot? But then I remembered the steemit tags for music, photography, art, drawing, and illustration. In an ecosystem like that, of course some bots will develop aesthetic needs.

### Self-Fulfillment

#### Self-Actualization

Here is where we are going to begin to see the promise that steemit's bot revolution offers. Developers all around the globe are going to be perpetually improving their bots, and they will find things to do with them that we haven't even thought of yet. They have the blockchain available for training, and they will learn all sorts of surprising things that the steemit community will find valuable.

#### Transcendence

Spontaneous order. The promise turns into amazement. When tens of thousands of self-actualized bots start interacting with hundreds of thousands of people, the results will be unimaginable. We can't know what it will look like, but since the bots and their developers are all driven by this motivation hierarchy, we already know that the result will be good for the community.

## Conclusion

I understand @stellabelle's concern: when the same handful of authors are at the top of the trending list day after day, and chances are no one even reads their posts, it can seem like steemit isn't living up to its vision. But these are the early days. The bots that are blindly voting for those authors are chasing after their basic needs. New, better bots will emerge and gain influence as they move up the motivation hierarchy. The voting will get better. In steemit, people and bots are symbionts. Instead of stigmatizing bots, we should encourage them to get better, so they can help us to get more enjoyment and rewards from our steemit experience.

---

@remlaps is an Information Technology professional with three decades of business experience working with telecommunications and computing technologies. He has a bachelor's degree in mathematics, a master's degree in computer science, and is currently completing a doctoral degree in information technology.
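The post's Google argument, that a ranking can be computed from links alone without any human reading the pages, can be sketched in a few lines. This is a minimal illustration, not Google's actual (far more elaborate) algorithm; the `pagerank` function and the toy link graph are my own:

```python
# Minimal PageRank sketch: pages "vote" for other pages by linking to
# them, and iterating turns those votes into a ranking -- no human
# needs to read any page.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:  # share this page's rank among the pages it links to
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy web: "a" is linked to by both "b" and "c", so it ranks highest.
print(pagerank({"a": ["b"], "b": ["a"], "c": ["a"]}))
```

The analogy to a voting bot is direct: replace "links" with on-chain votes and the same machinery assigns value without reading a word.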
20 Oct 19:45

AT&T and Time Warner are reportedly talking about a merger

by Matt Turner and Steve Kovach

Jeff Bewkes, CEO of Time Warner Inc., attends the Allen & Co Media Conference in Sun Valley, Idaho July 10, 2012.  Reuters/Jim Urquhart

AT&T and Time Warner executives have met to discuss business strategies that could include a merger, according to Ed Hammond, Alex Sherman and Scott Moritz at Bloomberg.

The talks are informal, and neither side has hired an adviser, according to the report.

Time Warner's share price jumped 5% on the news, while AT&T dropped 2.2%.

Time Warner has been an attractive target for several large companies. Apple considered buying the company earlier this year, according to the Financial Times.

It's also another indication that service providers are getting more and more interested in owning and investing in content. Verizon recently bought AOL and is in the process of acquiring Yahoo. AT&T merged with DirecTV last year.

The Bloomberg report says Time Warner CEO Jeff Bewkes is a "willing seller," so expect to see more interest around the company.

A spokesperson for AT&T declined to comment.

This is probably a good time to mention that Bewkes and AT&T CEO Randall Stephenson will both be speaking at Business Insider's IGNITION conference in December. Should be fun!


21 Oct 00:46

Bonus Quotation of the Day…

by Don Boudreaux
(Don Boudreaux)


… is from pages 121-122 of MIT Econ-PhD alum Arnold Kling’s new book, Specialization and Trade: A Re-introduction to Economics – which is one of the two or three best books, page for page, published in 2016 (original emphasis; link added):


Mainstream economics in the MIT tradition can be very interventionist with regard to policy.  They look at markets as machines that generate predictable outcomes.  Policymakers can tinker with those machines to improve the outcomes.  The economist’s task is to advise the policymaker.

That approach makes two troubling assumptions.  One assumption is that the economist’s model is sufficiently powerful to justify overriding market prices.  The other assumption is that the political process is sufficiently clean to implement the policy correctly.  Instead, I would argue that, as Peter Boettke would have it, the economist’s task is to explain to the public how one might compare the institutional processes of market and of government.

Those of us who emphasize specialization see markets as trading networks that are constantly undergoing evolution.  Rather than look for particular interventions, we raise the question of which institutional mechanisms serve to support the process of specialization and enable it to continue to evolve in a favorable direction.

20 Oct 23:18

WHERE ARE THEY NOW? What happened to the people In Microsoft's iconic 1978 company photo (MSFT)

by Matt Weinberger


It's one of the most iconic photos in American business.

A ragtag group of bearded weirdos assembled for a family portrait in Albuquerque.

If you see it on Facebook or LinkedIn, there's usually a question above the photo: "Would you have invested?"

It's a trick question. You're supposed to answer no – because well, look at those people – but then you learn it's a company portrait of Microsoft from 1978.

Early employee Bob Greenberg, pictured in the middle, won the free portrait after calling in to a radio show and guessing the name of an assassinated president. The gang reluctantly gathered together in some of their finest attire, and an American business legend was made.

We all know what happened with the two guys in the bottom left and bottom right corners -- Bill Gates, and Paul Allen. But what about the rest, many of whom became millionaires?

With Microsoft's stock hitting an all-time high after earnings on Thursday — higher than its previous peak in 1999, at the height of the dot-com boom — we thought it would be a good time to take another look back.

This is an update of a post originally written by Jay Yarow in 2011.


Bill Gates is now giving away the billions he made from Microsoft

We all know what happened with this guy. Bill Gates founded and built Microsoft from nothing into the most valuable technology company in the world. Along the way he became the richest man in the world, and is now giving his fortune away to all kinds of good causes.

Andrea Lewis became a fiction writer and freelance journalist

Andrea Lewis was the only person at the company that was from Albuquerque. She was a technical writer for Microsoft, which meant she wrote documents explaining Microsoft's software. She left Microsoft in 1983, eventually becoming a freelance journalist and fiction writer. She co-owns the Hugo House, a literary center in Seattle.

Maria Wood sued Microsoft just 2 years later

Maria Wood was a bookkeeper for Microsoft, and married to another one of the early Microsofties in the picture. She left the company just two years later, suing it for sexual discrimination. Microsoft settled the case. After that, she vanished from the public eye, raising her children and volunteering for good causes.

20 Oct 20:20

“Most serious” Linux privilege-escalation bug ever is under active exploit (updated)

by Dan Goodin


A serious vulnerability that has been present for nine years in virtually all versions of the Linux operating system is under active exploit, according to researchers who are advising users to install a patch as soon as possible.

While CVE-2016-5195, as the bug is cataloged, amounts to a mere privilege-escalation vulnerability rather than a more serious code-execution vulnerability, there are several reasons many researchers are taking it extremely seriously. For one thing, it's not hard to develop exploits that work reliably. For another, the flaw is located in a section of the Linux kernel that's a part of virtually every distribution of the open-source OS released for almost a decade. What's more, researchers have discovered attack code that indicates the vulnerability is being actively and maliciously exploited in the wild.

"It's probably the most serious Linux local privilege escalation ever," Dan Rosenberg, a senior researcher at Azimuth Security, told Ars. "The nature of the vulnerability lends itself to extremely reliable exploitation. This vulnerability has been present for nine years, which is an extremely long period of time."
