Shared posts

23 Sep 17:56

How I Make Explorable Explanations

by Nicky Case

You want to share a powerful idea – an idea that could really enrich the lives of whoever you gift it to! But communication is hard. So how do you share an idea, in such a way that makes sure the message is received?

Well, that's easy. Do it like this:

[image]

Nah, I'm kidding. The actual process is a lot more painful.

In this post, I'm going to share how I make explorable explanations: interactive things that help you learn by playing! Although my creative process involves a lot of backtracking and wrong turns and general flailing about, I have found a nice "pattern" for teaching things. There are no plug-and-chug formulas, but hopefully this post can help you help others learn something new – whether that's through reading, through watching, or through playing.

And the first thing to do is start with...


1) Start With 🤔?

“What makes [traditional teaching] so ineffective is that it answers questions the student hasn’t thought to ask. [...] You have to help them love the questions.”

~ Steven Strogatz, "Writing about Math for the Perplexed & the Traumatized"

Practicing what I preach: this very blog post starts with an important question that everyone cares about – "how do you share an idea?"

But you don't have to make the question so blatantly in-your-face. In The Evolution of Trust, I posed the question in the form of a story: why & how did WWI soldiers create peace in the trenches? And in Parable of the Polygons, I posed the question in the form of a game: why & how does a small individual bias result in large collective segregation?

However you choose to do it, you've got to make your reader / viewer / player curious – you've got to make them love your question.

Only then will they be motivated to make the long, hard climb up the...


2) Up The Ladder of Abstraction

Yeah I'm mixing my metaphors a bit here with hills and ladders but WHATEVER, the point is you've got to start grounded, then move your way up, step by step, slowly.

You may think that's obvious. But, seeing how many lecturers spew abstract jargon – talking in the clouds while their audience is still on the ground – yeah, no. Apparently it's not obvious. (Alternatively, some people try to "dumb it down" for the public. But the goal shouldn't be to dumb the ideas down, it should be to smart the people up.)

So: start on the ground. The very first thing you should do is give the reader a concrete experience. In Parable of the Polygons, you start by directly dragging & dropping a neighborhood of shapes. In The Evolution of Trust, you start by directly playing against a bunch of opponents. The trick is to pick an experience that will be a good foundation for everything else you'll be building on top of it.

Then, move up, step by step. I think a good logical argument is like a good story: it shouldn't be "one damn thing after another". Matt Stone & Trey Parker once said that instead of making stories like this: "this happens, and then that happens, and then that happens, etc"... you should make stories like this: "this happens, THEREFORE that happens, BUT that happens, THEREFORE this happens, etc".
(for more on this idea, watch Tony Zhou's brilliant video essay on structuring video essays)

The same is true of any good explanation. In The Evolution of Trust, I tried to connect as many points as I could with BUT: "You can both win if you both cooperate BUT in a single game you'll both cheat BUT in a repeated game cooperation can succeed BUT in this scenario cheaters take over in the short term BUT in the long term the cooperators succeed again BUT..." and so on, and so on.

I like these big BUTs, and I cannot lie: it means I can show off a new counter-intuitive idea every few minutes! That's a story that's packed with plot twists.
(note: you may also sometimes want to step back down from the abstract to the concrete. check out Bret Victor's Up & Down The Ladder of Abstraction, which has inspired, like, 90% of my work.)

Anyway, once you've helped your reader reach the top of the hill / ladder / whatever metaphor we're using here, it's best to end with...


3) End With 🤔?

You want to share a powerful idea – why's it powerful? How does your idea let people see further?

At the end of most of my explorables, I have a "Sandbox Mode". There's a sandbox at the end of Polygons, Trust, Ballot, Fireflies, Emoji Simulator... yeah to be honest, it's a bit of a cliché for me at this point, but here's the reason why I have those sandboxes:

In the beginning, I start by giving the player my question. And at the end, I want them to explore their own questions.

Once you've helped someone get to the top of a hill, your student can now see not just other hills that they didn't see before, but other hills that even you didn't see before. That's the true value of ending on an open-ended question: it allows the student to go beyond the teacher.

. . .

I feel like I've finally made it to the top of a tiny hill. I made my first explorable explanation 3½ years ago: a tutorial on making a cool visual effect for 2D games. And I've learnt a heck of a lot since then!

But the more I learn, the more I realize how much I've yet to learn. There's so much I want to try out. Heck, here's a list:

  • Explorables that aren't just single-player
  • Explorables that use real-world data
  • Explorables where you actually solve problems, not just puzzles
  • Explorables that don't follow a set linear story, but instead change their lesson based on the reader's interests & prior knowledge.
  • Explorables that are partially user-generated
  • Explorables that allow dialogue between peer learners
  • Explorables that aren't standalone experiences, but something you can come back to again and again over time.
  • Explorables in VR, or AR, or just... R.
  • Explorables where you can actually make your own projects... such as making an explorable!

Trying out all of that seems pretty daunting, but 1) "How do you eat an elephant? One bite at a time." And 2) a lot of other people are also interested in making explorables! It's impossible for any one person to climb all these hills, but collectively, we can explore this wild, weird terrain – and together, we can bite a lot of elephants! okay my metaphors are getting really mixed here

But the point is this: TRUE learning is a never-ending process. You start with a 🤔, you end with more 🤔. Like Sisyphus, every time we get to the top of a hill, we'll just have to go back down, to perform the climb again.

And I wouldn't have it any other way.


“Tiger got to hunt, bird got to fly;
Man got to sit and wonder 'why, why, why?'
Tiger got to sleep, bird got to land;
Man got to tell himself he understand.”


~ Kurt Vonnegut

04 May 16:22

Emergence: GIF-splained

by Nicky Case

...other than that, though, nobody agrees on a workable definition of "emergence", let alone a theory of emergence. But at least I can explain this much with a small GIF.
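
The GIF can't play in a text feed, so – as a stand-in, not a claim about what the GIF actually shows – here's the canonical toy example of emergence, Conway's Game of Life, sketched in a few lines of Python. Simple local rules, no global plan, yet a "glider" emerges and travels, though no rule mentions travel:

# Conway's Game of Life: four dead-simple local rules, yet a "glider"
# emerges -- a shape that crawls across the grid, even though nothing
# in the rules says anything about movement.
import numpy as np

def step(grid):
    # Count each cell's eight neighbors (the grid wraps at the edges).
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

grid = np.zeros((10, 10), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:  # seed one glider
    grid[y, x] = 1

for t in range(5):
    print(f"t = {t}")
    print("\n".join("".join(".#"[c] for c in row) for row in grid), "\n")
    grid = step(grid)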

(Last week, I experimented with explaining stuff through a blog-comic. This week, I'm experimenting with a GIF-splanation. What do you think? Also, it's public domain, so it's free for you to use in your own blog or presentation or whatever! Download the GIF here.)

01 Jan 21:54

Russia Hysteria Infects WashPost Again: False Story About Hacking U.S. Electric Grid

by Glenn Greenwald

(updated below)

The Washington Post on Friday reported a genuinely alarming event: Russian hackers have penetrated the U.S. power system through an electrical grid in Vermont. The Post headline conveyed the seriousness of the threat:

[screenshot of the original headline]

The first sentence of the article directly linked this cyberattack to alleged Russian hacking of the email accounts of the DNC and John Podesta — what is now routinely referred to as “Russian hacking of our election” — by referencing the code name revealed on Wednesday by the Obama administration when it announced sanctions on Russian officials: “A code associated with the Russian hacking operation dubbed Grizzly Steppe by the Obama administration has been detected within the system of a Vermont utility, according to U.S. officials.”

The Post article contained grave statements from Vermont officials of the type politicians love to issue after a terrorist attack to show they are tough and in control. The state’s Democratic governor, Peter Shumlin, said:

Vermonters and all Americans should be both alarmed and outraged that one of the world’s leading thugs, Vladimir Putin, has been attempting to hack our electric grid, which we rely upon to support our quality of life, economy, health, and safety. This episode should highlight the urgent need for our federal government to vigorously pursue and put an end to this sort of Russian meddling.

Vermont Sen. Patrick Leahy issued a statement warning: “This is beyond hackers having electronic joy rides — this is now about trying to access utilities to potentially manipulate the grid and shut it down in the middle of winter. That is a direct threat to Vermont and we do not take it lightly.”

The article went on and on in that vein, with all the standard tactics used by the U.S. media for such stories: quoting anonymous national security officials, reviewing past acts of Russian treachery, and drawing the scariest possible conclusions (“‘The question remains: Are they in other systems and what was the intent?’ a U.S. official said”). 

The media reactions, as Alex Pfeiffer documents, were exactly what one would expect: hysterical, alarmist proclamations of Putin’s menacing evil:

[embedded tweets]

The Post’s story also predictably and very rapidly infected other large media outlets. Reuters thus told its readers around the world: “A malware code associated with Russian hackers has reportedly been detected within the system of a Vermont electric utility.”


What’s the problem here? It did not happen.

There was no “penetration of the U.S. electricity grid.” The truth was undramatic and banal. Burlington Electric, after receiving a Homeland Security notice sent to all U.S. utility companies about the malware code found in the DNC system, searched all its computers and found the code in a single laptop that was not connected to the electric grid.

Apparently, the Post did not even bother to contact the company before running its wildly sensationalistic claims, so Burlington Electric had to issue its own statement to the Burlington Free Press, which debunked the Post’s central claim (emphasis in original): “We detected the malware in a single Burlington Electric Department laptop not connected to our organization’s grid systems.”

So the key scary claim of the Post story — that Russian hackers had penetrated the U.S. electric grid — was false. All the alarmist tough-guy statements issued by political officials who believed the Post’s claim were based on fiction.

Even worse, there is zero evidence that Russian hackers were even responsible for the implanting of this malware on this single laptop. The fact that malware is “Russian-made” does not mean that only Russians can use it; indeed, like a lot of malware, it can be purchased (as Jeffrey Carr has pointed out in the DNC hacking context, assuming that Russian-made malware must have been used by Russians is as irrational as finding a Russian-made Kalashnikov AKM rifle at a crime scene and assuming the killer must be Russian).

As the actual truth emerged once the utility company issued its statement, the Post rushed to fix its embarrassment, beginning by dramatically changing its headline:

[screenshot of the revised headline]

The headline is still absurd: They have no idea that this malware was placed by a “Russian operation” (though they would likely justify that by pointing out that they are just stenographically passing along what “officials say”). Moreover, nobody knows when this malware was put on this laptop, how, or by whom. But whatever else is true, the key claim — “Russian hackers penetrated U.S. electricity grid” — has now been replaced by the claim that this all shows “risk to U.S. electrical grid.”

As journalists realized what did — and did not — actually happen here, the reaction was swift:

[embedded tweets from journalists]

This matters not only because one of the nation’s major newspapers once again published a wildly misleading, fearmongering story about Russia. It matters even more because it reflects the deeply irrational and ever-spiraling fever that is being cultivated in U.S. political discourse and culture about the threat posed by Moscow.

The Post has many excellent reporters and smart editors. They have produced many great stories this year. But this kind of blatantly irresponsible and sensationalist tabloid behavior — which tracks what they did when promoting that grotesque PropOrNot blacklist of U.S. news outlets accused of being Kremlin tools — is a byproduct of the Anything Goes mentality that now shapes mainstream discussion of Russia, Putin, and the Grave Threat to All Things Decent in America that they pose.

The level of groupthink, fearmongering, coercive peer pressure, and über-nationalism has not been seen since the halcyon days of 2002 and 2003. Indeed, the very same people who back then smeared anyone questioning official claims as Saddam sympathizers or stooges and left-wing un-American loons are back for their sequel, accusing anyone who expresses any skepticism toward claims about Russia of being Putin sympathizers and Kremlin operatives and stooges.

But it’s all severely exacerbated by social media in ways that we don’t yet fully understand. A large percentage of journalists sit on Twitter all day. It’s their primary window into the world. Because of how intense and raw the emotions still are from Trump’s defeat of Clinton, the social media benefits from tweeting and publishing unhinged claims about Trump and Putin are immense and immediate: thousands upon thousands of re-tweets, a rapidly building follower count, and huge amounts of traffic.

Indeed, the more unhinged it is, the greater the benefits are (see some of the most extreme examples here). That’s how otherwise rational people keep getting tricked into posting and re-tweeting and sharing extremely dubious stories that turn out to be false.

And that’s to say nothing of the non-utilitarian social pressures. It’s not news that coastal elites — particularly media and political figures — were and are virtually unified in their unbridled contempt for Trump. And we have seen over and over that any time there is a new Prime Foreign Villain consecrated — now Putin — U.S. media figures lead the campaign. As a result, any denunciation or accusation toward Trump or Russia, no matter how divorced from reason or devoid of facts, generates instant praise, while any questioning of it prompts instant peer-group denunciation, or worse.

Few things are more dangerous to the journalistic function than groupthink, and few instruments have been invented that foster and reinforce groupthink like social media, particularly Twitter, the platform most used by journalists. That’s a phenomenon that merits far more study, but examples like this one highlight the dynamic.

In this case, the effect is a constant ratcheting up of tensions between two nuclear-armed powers whose nuclear systems are still on hair-trigger alert and capable of catastrophic responses based on misunderstanding and misperception. Democrats and their media allies are rightly alarmed about the potential dangers of Trump’s bellicose posture toward China, but remarkably and recklessly indifferent to the dangers of what they themselves are doing here.

* * * * *

Those interested in a sober and rational discussion of the Russia hacking issue should read the following:

(1) Three posts by cybersecurity expert Jeffrey Carr: first, on the difficulty of proving attribution for any hacks; second, on the irrational claims on which the “Russia hacked the DNC” case is predicated; and third, on the woefully inadequate, evidence-free report issued by the Department of Homeland Security and FBI this week to justify sanctions against Russia.

(2) Yesterday’s Rolling Stone article by Matt Taibbi, who lived and worked for more than a decade in Russia, titled: “Something About This Russia Story Stinks.”

(3) An Atlantic article by David A. Graham on the politics and strategies of the sanctions imposed this week on Russia by Obama; I disagree with several of his claims, but the article is a rarity: a calm, sober, rational assessment of this debate.

Since it is so often distorted, permit me once again to underscore my own view on the broader Russia issue: Of course it is possible that Russia is responsible for these hacks, as this is perfectly consistent with (and far more mild than) what both Russia and the U.S. have done repeatedly for decades.

But given the stakes involved, along with the incentives for error and/or deceit, no rational person should be willing to embrace these accusations as Truth unless and until convincing evidence has been publicly presented for review, which most certainly has not yet happened. As the above articles demonstrate, this week’s proffered “evidence” — the U.S. government’s evidence-free report — should raise rather than dilute suspicions. It’s hard to understand how this desire for convincing evidence before acceptance of official claims could even be controversial, particularly among journalists.


UPDATE: Just as The Guardian had to do just two days ago regarding its claim about WikiLeaks and Putin, the Washington Post has now added an editor’s note to its story acknowledging that its key claim was false:

[screenshot of the editor’s note]

Is it not very clear that journalistic standards are being casually dispensed with when the subject is Russia?


29 Dec 17:13

A Constellation of Guiding Stars

by Nicky Case

As my mama always said, every pipe bomb has a silver lining.

2016 hasn't been the most uplifting of years, world-events-wise. This summer we saw two lone-wolf terrorist attacks, shootings of police and by police, as well as a violent military coup attempt. And personally, I'm worried by the rise of zero-sum nationalism in the US, UK, India, China, Sweden, France, Germany, etc in the past year – even if it is an understandable backlash against the irresponsible form of globalism in the past decade.

But if this year's been a dumpster fire, it should be a dumpster fire under our ass. It should kick our butts out of learned helplessness and/or complacency. I know 2016's left many of my friends de-motivated, but personally, I've never felt so motivated in my life. Because, for the first time in a long time, I have a newfound clarity over what I must do.

And I need to share this feeling with the world. So, after much reflecting on this past year, here's my three guiding stars that I'll do my best to follow in the new year:

Cooperation, Complexity, and Consonance.

Cooperation

(pictured: an example of cooperative symbiosis – the hummingbird gets nectar from the flower, and in exchange, the flower gets a bird inside it. Free trade!)

“Hey mom, what's that word for when two people are competing, but they both get what they want?”

“Compromise.”

“No, like... both people end up happy?”

“Win-Win.”

“More scientific-sounding than that.”

[fifteen flash-backs later]

“Non-zero-sum game.”

“Yeah. Yeah that's the word. Thanks, mom!”

~ from Arrival, my favorite non-Zootopia movie of 2016

Guys. It's time for some game theory.

Too often we think of our politics and personal relationships as "zero-sum games", like sports or chess, where "they" must lose for "us" to win. Immigrants must lose for native citizens to win. Majority groups must lose for minorities to win. The working class must lose for the larger economy to win. And so on.

But more often than not, the real world is non-zero-sum: everyone can win – as long as you're patient and creative enough to find long-term Win-Win solutions. For example: a revival of civic nationalism could help both immigrants and natives win. Community-oriented policing could help both cops and minority communities win. Job re-training & trade schools could help both the working class and the larger economy win.

Maybe. I don't know if those specific suggestions will work, but I do know if we want to solve the big pressing problems of this century, we have to break out of our zero-sum mindsets, and think non-zero-sum.
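
To make the zero-sum / non-zero-sum distinction concrete, here's a tiny Python sketch – every payoff number is invented, it's just the shape of the idea:

# Toy payoff tables; each entry is (my payoff, your payoff) for a pair of moves.

zero_sum = {        # e.g. splitting a fixed pot: my gain is exactly your loss
    ("hawk", "hawk"): (0, 0),
    ("hawk", "dove"): (+5, -5),
    ("dove", "hawk"): (-5, +5),
    ("dove", "dove"): (0, 0),
}

non_zero_sum = {    # e.g. trade or trust: some outcomes grow the pot
    ("cooperate", "cooperate"): (+3, +3),   # Win-Win exists...
    ("cooperate", "defect"):    (-1, +5),
    ("defect",    "cooperate"): (+5, -1),
    ("defect",    "defect"):    ( 0,  0),   # ...and so does Lose-Lose.
}

for name, game in [("zero-sum", zero_sum), ("non-zero-sum", non_zero_sum)]:
    joint = {moves: mine + yours for moves, (mine, yours) in game.items()}
    print(name, "joint payoffs:", joint)
# In the zero-sum table every joint payoff is 0: the pie is fixed, so the only
# question is how to divide it. In the non-zero-sum table, mutual cooperation
# grows the pie to +6 -- that's the Win-Win worth hunting for.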

So how will I follow my guiding star of Cooperation, of non-zero-sum thinking, in 2017?

1. Join organizations committed to finding Win-Win solutions. One group I'm already volunteering with is Better Angels, an organization dedicated to depolarizing America, to re-build that "more perfect Union". Look forward to some of my collaborations with them in the coming year!

2. Take time to understand and befriend people outside my bubble. Earlier this year, I made a short comic to help peeps understand those on the other side of the left/right political spectrum. But that's just understanding people from an academic distance – nothing beats actually getting to know others, person-to-person, friend-to-friend.

3. Make games to teach people to think non-zero-sum. Now this one uses my specific skills and knowledge! I've already made many games to explain complex systems, I know quite a bit about the game theory and social psychology of cooperation, and I want to share all that with the world.

However, in order to find non-zero-sum solutions, we first have to break out of our simplistic ways of thinking, which brings me to my second guiding star in 2017...

Complexity

(ayyyyy it's everyone's favorite fractal, the Mandelbrot Set! this fractal is an example of how even simple rules can create infinite complexity)

Look at this GIF of a pendulum:

[GIF: a single pendulum swinging]

Pretty simple, and pretty predictable. That's what most of our businesses, policymakers, and world leaders strive for: predictability. But look what happens when we simply add another pendulum to the end of that pendulum:

[GIF: a double pendulum swinging]

Chaos. You can't predict what it'll do. And that's just a double-pendulum – what would that mean for far more complex systems like economics, politics, and culture, where the parts of the systems aren't just mindless physical objects, they're human beings who can change, react, and fight back in response to the policies you try to enforce?
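
Don't take my word for the unpredictability – here's a minimal Python sketch, using the standard equations of motion for a frictionless double pendulum (two equal point masses on rigid rods), integrated with crude semi-implicit Euler steps. Nudge one starting angle by a millionth of a radian, and the two runs soon disagree completely:

from math import sin, cos, pi

# Physical constants: gravity, rod lengths, bob masses.
g, L1, L2, m1, m2 = 9.81, 1.0, 1.0, 1.0, 1.0

def accel(t1, w1, t2, w2):
    # Standard double-pendulum angular accelerations.
    d = t1 - t2
    den = 2 * m1 + m2 - m2 * cos(2 * d)
    a1 = (-g * (2 * m1 + m2) * sin(t1) - m2 * g * sin(t1 - 2 * t2)
          - 2 * sin(d) * m2 * (w2 * w2 * L2 + w1 * w1 * L1 * cos(d))) / (L1 * den)
    a2 = (2 * sin(d) * (w1 * w1 * L1 * (m1 + m2) + g * (m1 + m2) * cos(t1)
          + w2 * w2 * L2 * m2 * cos(d))) / (L2 * den)
    return a1, a2

def run(t1, t2, seconds=10.0, dt=0.0005):
    # Crude integration -- fine for a qualitative demo, not for physics homework.
    w1 = w2 = 0.0
    for _ in range(int(seconds / dt)):
        a1, a2 = accel(t1, w1, t2, w2)
        w1 += a1 * dt; w2 += a2 * dt
        t1 += w1 * dt; t2 += w2 * dt
    return t1, t2

print("run A ends at angles:", run(pi / 2, pi / 2))
print("run B ends at angles:", run(pi / 2 + 1e-6, pi / 2))
# A one-millionth-of-a-radian nudge, and within ten simulated seconds
# the two pendulums are doing completely different things.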

When I talked about "irresponsible globalism" earlier, this was what I meant. The <scarequotes>elites</scarequotes> were over-confident in their ability to predict, plan, and engineer their desired solutions – just "export democracy" to Iraq, right? Maybe put all of Europe on a common currency? Ooh, those subprime mortgages look like a reeeeeal good investment... (To be fair, the leaders of the populist backlash against globalism don't have much better answers, either. Protectionism may hurt our economy in the long run, doesn't protect workers against automation anyway, and most importantly, it makes war in both directions more tempting. To quote Frédéric Bastiat, “When goods cannot cross borders, armies will.”)

There's no conspiracy, just cockiness.

So, what, does that mean that planning means nothing? Not at all! Think of a football game. It's a chaotic, complex system. There's no way to predict where the ball will be in a minute's time, but that doesn't mean a coach or player can't be highly skilled. No, what matters is not precision and prediction, but habits and heuristics. (In the case of football, habits = a player's reflexes, and heuristics = a coach's flexible game plan)

More examples: It's the difference between micro-managing, and giving your employees autonomy over their craft. It's the difference between planning your entire life's career path while you're still in high school, and allowing yourself to stumble upon serendipitous opportunities. It's the difference between revolution – which usually ends in bloody death – and evolution – which gave us all life on earth. (Evolution isn't just a heuristic, it's a meta-heuristic! Also, see this post I wrote this summer: Evolution, Not Revolution)

I have no idea if what I'm saying makes sense to someone who's not already familiar with chaos theory. But that's exactly why I need to build new habits and heuristics, to follow my guiding star of Complexity:

1. Make games to teach & apply complexity theory. The first step is to break people out of their old mindset of trying to control chaos, and give 'em a new mindset of trying to harness chaos. My words are incredibly bad at explaining this, so hopefully playful games will do a much better job at creating an intuition for complexity.

2. Make tools for thinking and talking in complexity. The second step is to help people use their new complexity-mindsets to tackle the problems we face today. To do this, I'll make tools so people can model real world problems as feedback loops, complex networks, agent-based simulations, etc. I don't know the solutions to the world's problems, but whatever they might be, I know one thing: we have to figure it out together.

3. Actually reach out to teachers, researchers, and policymakers. All the stuff I've made so far has been for a "general audience", but I think maybe 2017 is the year I actually reach out to "the experts", to also get them to think, talk, and teach Complexity as well.

But maybe now I've gone way too abstract, too meta. It's time for me to get intimately personal, with my final guiding star next year...

Consonance

(it's the neko marching band, what more do you want)

Think of humanity as a band playing a tune. To get a good result, you need two things. Each player’s individual instrument must be in tune and also each must come in at the right moment so as to combine with all the others.

But there is one thing we have not yet taken into account. We have not asked [...] what piece of music the band is trying to play. The instruments might be all in tune and might all come in at the right moment, but even so the performance would not be a success if they had been engaged to provide dance music and actually played nothing but Dead Marches.

Morality, then, seems to be concerned with three things:

Firstly, with fair play and harmony between individuals.

Secondly, with what might be called tidying up or harmonising the things inside each individual.

Thirdly, with the general purpose of human life as a whole: what man was made for: [...] what tune the conductor of the band wants it to play.

~ from C.S. Lewis's Mere Christianity, a book I haven't read yet but it's full o' tasty quotes

That's what I mean by Consonance: being in tune with others, being in tune with yourself, and altogether, singing a greater song.

We're failing on all three counts.

"Being in tune with others" – I don't think I need to recap the anger and extremism and political polarization we've seen this year. Everyone's thinking zero-sum Win-Lose, few are thinking non-zero-sum Win-Win.

"Being in tune with yourself" – gawd, everyone I know is an anxious wreck, including myself, honestly. Maybe it's our thirst for bloody, sensationalist "news". Maybe it's our culture's fostering of mental fragility. Maybe it's our fear of becoming unnecessary in an age of globalization & automation. Maybe maybe maybe.

"Singing a greater song" – You'd think the globalists would at least sing Lennon's ♫ Imagine there's no countries ♫ but instead it's all calling their opponents racist. And you'd think the nationalists would at least sing the praises of the Great American Dream, instead it's all calling their opponents elitist. We're no longer for things, we're simply against things. Things like "optimism" and "actually wanting to find solutions" are seen as naïve, and now it's Cool™ to be cynical – in fact, the only way I can convince people not to be cynical, is to be cynical about being cynical. So, here goes:

Cynicism is lazy, self-serving wank.
Meanwhile, THERE IS WORK TO BE DONE.

I try to live by the habit/heuristic, “Think Global, Act Local.” Thinking global means you actually have a greater song you want to sing, instead of becoming a self-serving narcissist. Acting local means you can effectively sing your part of the song, instead of becoming overwhelmed, helpless, cynical about everything that has to be done.

And acting on yourself is the first step to acting local. Be the change you want to see in the world, and whatnot. So, here's how I'll follow my guiding star of Consonance in 2017:

1. Get in tune with others. On a very very very personal note, I've realized that I have a breadth of friendship, but I don't really have a depth of friendship. Many friends, but not many close friends. Quite honestly, I feel lonely sometimes. I want to fix that next year.

2. Get in tune with myself. Know thyself, as some dead philosopher once said. 2016 was the year I finally took up Cognitive Behavioral Therapy, which gave me lots of good habits and heuristics to help with my anxiety. But I still need to go beyond fixing neuroses, and strengthen my core character. I'll start by honestly admitting my problems: I sometimes don't admit personal responsibility. I sometimes let myself get taken advantage of. I can get caught up in my violent intrusive thoughts. I want to fix all these next year, too.

3. Sing a greater song. And this is where I get hopelessly meta. I've already picked not just one song to sing along with others, but three: Cooperation, Complexity, and Consonance. And as you saw, these guiding stars are really a constellation – they all connect with each other in some deep way.

Of course, these aren't the only – or even the best – guiding stars you could follow, and I encourage you to find the star(s) most meaningful to you. But I truly believe that following the Cooperation-Complexity-Consonance constellation wouldn't just help our society as a whole – it would help you personally, as a human being.

So, I'm gonna try to make myself the best example I can be, in the coming year.

See you in 2017!

<3,
~ Nicky Case

12 Nov 13:40

Spotify is writing massive amounts of junk data to storage drives

by Dan Goodin

SSD modules like this one are being abused by Spotify. (credit: iFixit)

For almost five months—possibly longer—the Spotify music streaming app has been assaulting users' storage devices with enough data to potentially take years off their expected lifespans. Reports of tens or in some cases hundreds of gigabytes being written in an hour aren't uncommon, and occasionally the recorded amounts are measured in terabytes. The overload happens even when Spotify is idle and isn't storing any songs locally.

The behavior poses an unnecessary burden on users' storage devices, particularly solid state drives, which come with a finite amount of write capacity. Continuously writing hundreds of gigabytes of needless data to a drive every day for months or years on end has the potential to cause an SSD to die years earlier than it otherwise would. And yet, Spotify apps for Windows, Mac, and Linux have engaged in this data assault since at least the middle of June, when multiple users reported the problem in the company's official support forum.

"This is a *major* bug that currently affects thousands of users," Spotify user Paul Miller told Ars. "If for example, Castrol Oil lowered your engine's life expectancy by five to 10 years, I imagine most users would want to know, and that fact *should* be reported on."


26 Sep 13:11

Change is Made with Code

by Google Blogs
Kayle Sawyer

"What would the world look like if only 20 percent of women knew how to write?" (You're looking at it.)

What would the world look like if only 20 percent of women knew how to write? How many fewer great books would there be? How many important stories would go unreported? How many innovations would we lose? How many brilliant women would be unable to fulfill their potential?

That’s not just a theoretical question. Today, only a small minority of women know how to write code. That limits their ability to participate in a growing part of our global economy. It limits their ability to affect change as entire industries are transformed by technology. And it limits their potential to impact millions of lives through the power of code.

To change this trajectory, we need to do all we can to inspire women and girls to see that learning to code is critical to creating a brighter future for everyone. That’s why I’m excited to share that, today, Google’s Made with Code, together with YouTube, is teaming up with the Global Citizen Festival and millions of teen girls to ignite a movement for young women to change the world through the power of code.

Over the last five years, millions of Global Citizens have influenced world leaders and decision makers, and contributed to shaping our world for the better. As we’ve seen this movement grow, we’ve learned about some incredible women who saw problems in their communities and realized that the biggest impact they could have was through computer science. They’ve used an interest in computer science and tech to help the homeless, stop sexual assault, and bridge the gender gap in technology – check out their stories here:

[embedded video]

These women are doing big things, blazing a path for the next generation of girls, but they can’t do it alone. The vast potential around using code to improve the world cannot be realized if there are only a few voices influencing how it’s shaped. That’s why, today, we’re inviting teen girls everywhere to join the movement. Our new coding project gives young women a chance to make their voice heard by coding a statement about the change they want to see in the world.
This week, hundreds of thousands of girls from around the country have already used code to share their vision for a better, more inclusive, more equitable world:

[a gallery of coded designs]

These coded designs will be displayed onstage at the Global Citizen Festival, as symbols of the many different voices from teen girls, standing up for the change they want to see in the world.

(pictured: together with musicians, sisters, YouTube sensations, and newly minted coders Chloe x Halle, teen girls are getting their start in code)

Our efforts go well beyond this project. Made with Code is joining forces with Iridescent and UN Women to support the launch of the Technovation Challenge 2017 which gives girls the opportunity to build their own apps that tackle the real-life issues they see around them.

Please tune into the Global Citizen Festival livestream at youtube.com/globalcitizen on September 24 to catch all the action. And, more importantly, join us and encourage the young women in your life to try out coding and contribute their ideas for how to make a better future.

Posted by Susan Wojcicki, CEO, YouTube

08 Sep 21:39

How to be perfectly unhappy

by Matthew Inman

17 May 01:56

What do rising mortality rates tell us?

by Anne Buchanan
When I was a student at a school of public health in the late '70s, the focus was on chronic disease. This was when the health and disease establishment was full of the hubris of thinking they'd conquered infectious disease in the industrialized world, and that it was now heart disease, cancer and stroke that they had to figure out how to control.  Even genetics at the time was confined to a few 'Mendelian' (single gene) diseases, mainly rare and pediatric, and few even of these genes had been identified.

My field was Population Studies -- basically the demography of who gets sick and why, often with an emphasis on "SES" or socioeconomic status.  That is, the effect of education, income and occupation on health and disease.  My Master's thesis was on socioeconomic differentials in infant mortality, and my dissertation was a piece of a large study of the causes of death in the whole population of Laredo, Texas over 150 years, with a focus on cancers.  Death rates in the US, and the industrialized world in general were decreasing, even if ethnic and economic differentials in mortality persisted.

So, I was especially interested in the latest episode of the BBC Radio 4 program The Inquiry, "What's killing white American women?"  Used to decades of increasing life expectancy in all segments of the population, researchers paid close attention when they noted that mortality rates were actually rising among less educated, middle-aged American women.

A study published in PNAS in the fall of 2015 by two economists was the first to note that mortality in this segment of the population, among men and women, was rising enough to affect mortality rates among middle-aged white Americans in general.  Mortality among African American non-Hispanics and Hispanics continued to fall.  If death rates had remained at 1998 rates or continued to decline among white Americans in this age group who had no more than a high school education, half a million deaths would have been avoided, which is more, says the study, than died in the AIDS epidemic through the middle of 2015.

What's going on?  The authors write, "Concurrent declines in self-reported health, mental health, and ability to work, increased reports of pain, and deteriorating measures of liver function all point to increasing midlife distress."  But how does this lead to death?  The most significant causes of mortality are "drug and alcohol poisonings, suicide, and chronic liver diseases and cirrhosis."  Causes associated with pain and distress.


[chart. Source: The New York Times]

The Inquiry radio program examines in more detail why this group of Americans, and women in particular, are suffering disproportionately.  Women, they say, have been turning to riskier behaviors – drinking, drug addiction and smoking – at a higher rate than men.  And half of the increase in mortality is due to drugs, including prescription drugs, opioids in particular.  Here they zero in on the history of opioid use during the last 10 years, a history that shows in stark relief that the effects of economic pressures on health and disease aren't due only to the income or occupation of the target or study population.

Opioids, prescribed as painkillers for the relief of moderate to severe pain, have been in clinical use since the early 1900's.  Until the late 1990's they were used only very briefly after major surgery or for patients with terminal illnesses, because the risk of addiction or overdose was considered too great for others.  In the 1990's, however, Purdue Pharma, the maker of the painkiller OxyContin, began to lobby heavily for expanded use.  They convinced the powers-that-be that chronic pain was a widespread and serious enough problem that opioids should and could be safely used by far more patients than traditionally accepted.  (See this story for a description of how advertising and clever salesmanship pushed OxyContin onto center stage.)

Purdue lobbying led to pain being classified as a 'vital sign', which is why any time you go into your doctor's office now you're asked whether you're suffering any pain.  Hospital funding became partially dependent on screening for and reducing pain scores in their patients.

Ten to twelve million Americans now take opioids chronically for pain.  Between 1999 and 2014, 250,000 Americans died of opioid overdose.  According to The Inquiry, that's more than the number killed in motor vehicle accidents or by guns.  And it goes a long way toward explaining rising mortality rates among working-class middle-aged Americans.  And note that the rising mortality rate has nothing to do with genes.  It's basically the unforeseen consequence of greed.

Opioids are money-makers themselves, of course (see this Forbes story about the family behind Purdue Pharma, headlined "The OxyContin Clan: The $14 Billion Newcomer to Forbes 2015 List of Richest U.S. Families;" the drug has earned Purdue $35 billion since 1995) but pharmaceutical companies also make money selling drugs to treat the side effects of opioids: nausea, vomiting, drowsiness, constipation, and more.  Purdue just lost its fight against allowing generic versions of OxyContin on the market, which means both that cheaper versions of the drug will be available, and that other pharmaceutical companies will have a vested interest in expanding its use.  Indeed, Purdue just won approval for use of the drug in 11-17 year olds.

In a rather perverse way, race plays a role in this epidemic, too, in this case a (statistically) protective one even though it has its roots in racial stereotyping.  Many physicians are less willing to prescribe opioids for African American or Hispanic patients because they fear the patient will become addicted, or that he or she will sell the drugs on the street.

"Social epidemiology" is a fairly new branch of the field, and it's based on the idea that there are social determinants of health beyond the usual individual-level measures of income, education and occupation.  Beyond socioeconomic status, to determinants measurable on the population-level instead; location, availability of healthy foods, medical care, child care, jobs, pollution levels, levels of neighborhood violence, and much more.

Obviously the opioid story reminds us that profit motive is another factor that needs to be added to the causal mix.  Big Tobacco already taught us that profit can readily trump public health, and it's true of Big Pharma and opioids as well.  Having insinuated themselves into hospitals, clinics and doctors' offices, Big Pharma may have relieved a lot of pain, but at great cost to public health.

13 May 19:16

This Moment is Enough

by zenhabits
Kayle Sawyer

The Zen approach to FOMO.

By Leo Babauta

I was in a plane descending into Portland for a quick stopover, and I gazed upon a brilliant pink sunrise over blue and purple mountains, and my heart ached.

Instinctively, I looked over to Eva to share this breath-taking moment, but she was sleeping. I felt incomplete, not being able to share the moment with her, or with anyone. Its beauty was slipping through my fingers.

This was a teachable moment for me: I somehow felt this moment wasn’t enough, without being able to share it. It took me a second to remind myself: this moment is enough.

It’s enough, without needing to be shared or photographed or improved or commented upon. It’s enough, awe-inspiring just as it is.

I’m not alone in this feeling, that the moment needs to be captured by photo to be complete, or shared somehow on social media. It’s the entire reason for Instagram, for instance.

We feel the moment isn’t enough unless we talk about it, share it, somehow solidify it. The moment is ephemeral, and we want solidity and permanence. This kind of groundlessness can scare us.

This feeling of not-enoughness is fairly pervasive in our lives:

  • We sit down to eat and feel we should be reading something online, checking messages, doing work. As if eating the food weren’t enough.
  • We get annoyed with people when they don’t act as we want them to — the way they are feels like it’s not enough.
  • We feel directionless and lost in life, as if the life we have is not already enough.
  • We procrastinate when we know we should sit down to do important work, going for distractions, as if the work is not enough for us.
  • We always feel there’s something else we should be doing, and can’t just sit in peace.
  • We mourn the loss of people, of the past, of traditions … because the present feels like it’s not enough.
  • We are constantly thinking about what’s to come, as if it’s not enough to focus on what’s right in front of us.
  • We constantly look to improve ourselves, or to improve others, as if we and they are not already enough as we are.
  • We reject situations, reject people, reject ourselves, because we feel they’re not enough.

What if we accepted this present moment, and everyone and everything in it, as exactly enough?

What if we needed nothing more?

What if we accepted that this moment will slip away when it’s done, and saw the fleeting time we had with the moment as enough, without needing to share it or capture it?

What if we said yes to things, instead of rejecting them?

What if we accepted the “bad” with the good, the failures with the attempts, the irritating with the beautiful, the fear with the opportunity, as part of a package deal that this moment is offering us?

What if we paused right now, and saw everything in this present moment around us (including ourselves), and just appreciated it for what it is, as perfectly enough?

29 Apr 00:50

How To Simulate The Universe, In 134 Easy Steps

by Nicky Case

Here's the text version of a talk I gave a few days ago!

My talk was about systems and simulations – and how they can make us more empathetic, more empowered, or at least, give us a deeper understanding of the world... or something like that.

[slides from the talk]

For some reason, I never really considered a talk as a "real project". Coz I mean, it's just talking for half an hour, right? I do that all the time, whether or not the other person actually wanted to listen to me geek out about indie games for thirty minutes.

Only after typing up the text version of this talk did I realize how much effort it actually takes. I plugged it into a word count tool, and my talk is 3800+ words long. That's the longest thing I've ever written. Longer than Simulating The World In Emoji (2400+ words), longer than my rant on how we activists are doing things wrong (2800+ words), longer than my evo-bio-psychological dyke drama set in space (2500+ words).

And yet, 3800+ words is only half an hour of speech. (And so, the text version would take you just about twenty minutes to read)

So yeah, long story short, I'm going to start mentally counting talks as "real projects". (so I can feel better about having done them, and be more cautious about accepting offers to speak) They take a lot of effort, I do publish them online, and they actually seem to resonate a lot with people! Hopefully this talk hit a good balance of inspiring and actually practically useful.

My sketch-notes from the other talks (like I did for the systems-thinking workshop) will be coming soon!


* my talk's page's format was heavily inspired by idle words

25 Apr 23:48

Women on 20s

I get that there are security reasons for the schedule, but this is like the ONE problem we have where the right answer is both easy and straightforward. If we can't figure it out, maybe we should just give up and just replace all the portraits on the bills with that weird pyramid eye thing.

15 Apr 05:12

Because your computer only has so much space

by A Googler
Kayle Sawyer

finally, selective sync!

Google Drive for Mac/PC — the app that syncs files on your computer with Google Drive — is an easy way to make sure your files are safe and accessible from anywhere. Today, some new features are rolling out that’ll make your syncing and sharing experience even better.

Select what you sync

Drive can store terabytes (upon terabytes) but there’s a good chance your computer’s hard drive will run out of space if you sync everything. Fortunately, you can now select which folders or subfolders you want to sync — and deselect the ones you don’t.

When you deselect a folder, it’ll be removed from your computer but still kept safely in Drive. And Drive shows you the size of each folder, so you'll know how much space you're freeing up.

Take care of shared files and folders

After you sync your files, Drive makes it easy to move and delete items directly from your computer. But doing that with shared files can cause others to lose access. Now, Drive warns you when this might happen.

These updates are rolling out over the next week or so. As always, stay in touch on Google+ and Twitter to let us know what you think.
Happy syncing!
Posted by Aakash Sahney, Google Drive Product Manager


15 Apr 00:54

Systems Thinking & Journalism: Sketchy Notes

by Nicky Case

This week, I went to this small two-day workshop for journalists to learn about Systems Thinking, and to figure out how to apply it to their own work! Here are my messy sketch-notes, with some personal reflections.


Good news & bad news – they skimmed over the basics of Systems Thinking. I already knew a lot about it, so I got a lot out of the workshop, but the other journalists in the room who were totally new to the concept may have been lost. So, before I get into my notes + reflections, here’s my quick intro to what Systems Thinking is, by contrasting it to the conventional mindset:

CONVENTIONAL THINKING: linear cause-and-effect
(A causes B causes C, and so on)

SYSTEMS THINKING: nonlinear cause-and-effect
(A affects B, but B can also affect A. That is, feedback loops – like the vicious cycle of an arms race, or the balancing forces of ecosystems)

And Systems Thinking peeps also usually use visual diagrams to map these feedback loops. For example, here’s one on drug use, mental health, and criminal justice: (the Snowballs are loops that compound/reinforce something, the Seesaws are loops that bring back balance)

[causal-loop diagram of drug use, mental health, and criminal justice] (MMMM, I SURE LOVE JPEG ARTIFACTS)

But this nonlinearity makes these kinds of systems hard to analyze (with traditional methods), since there's no longer any one “root” cause when all things can affect all things. Lotsa systems are like this – political, economic, societal, etc… all ripe topics for a journalist to pick apart!
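
To make Snowballs vs. Seesaws concrete, here's a tiny Python sketch with invented coefficients: a reinforcing loop (an arms race) compounds on itself, while a balancing loop (a thermostat) settles down:

# Two toy feedback loops. The coefficients are made up;
# only the qualitative behavior matters.

def arms_race(a=1.0, b=1.0, rate=0.3, rounds=10):
    # Reinforcing ("Snowball"): A arms in response to B, and vice versa.
    for _ in range(rounds):
        a, b = a + rate * b, b + rate * a
    return a, b

def thermostat(temp=10.0, target=20.0, gain=0.5, ticks=10):
    # Balancing ("Seesaw"): the further from the target, the harder it pushes back.
    for _ in range(ticks):
        temp += gain * (target - temp)
    return temp

print("arms race after 10 rounds:", arms_race())    # both stockpiles ~14x bigger
print("temperature after 10 ticks:", thermostat())  # settles at ~20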


Alright, now with that super-short intro out of the way, here’s a two-page summary of my sketch-notes from the workshop:

[sketch-notes, page 1]

First page:
How to tell stories about systems, or stories with systems.

  • Start with a concrete human story, then expand outwards to the whole system.
  • How do you tell a nonlinear story with a linear medium (like text, film, audio)? One good example is The Wire. One could also use nonlinear mediums like comics (e.g. Chris Ware) or interactive stories/videogames.
  • Systems have “plot twists”: like how surprisingly quickly a feedback loop can spiral out of control.
  • At the philosophical, emotional core of system-stories: empathy. Individuals may have noble motives, but are constrained by the system. “None are to blame, but all are responsible.”

[sketch-notes, page 2]

Second page:
How to use Systems Thinking to positively affect the world!

  • “You don’t fix systems. You can only evolve & influence ‘em, so they can change themselves.”
  • In Foreign Aid: “implanting” a foreign fix is like taping flowers onto a tree, instead of helping the tree grow its own flowers.
  • How To Influence, But Not Control: Show positive examples that counter bad trends. Show them things that resonate, or “go viral”. Give ‘em the tools to help themselves, then back off.
  • In Conflict Mediation: (one of the instructors is a conflict mediator and peacemaker) Get input from diverse people. Get all sides to stop blaming each other, and instead, show them how the system is their common enemy. And positively reinforce and highlight moderates, and people who cross-cut different groups.

At the end of the workshop, we also had a group discussion on how to apply Systems Thinking to journalism, in practice and in teaching curriculum:

[sketch-notes: group discussion]


There you have it, the workshop summarized into three pages of notes! I did take more sketch-notes with my own personal reflections, but I worry maybe they’re too idiosyncratic, or don’t have enough context attached. But hey, I’ll post them here anyway:

[ten more pages of sketch-notes]

(as an Imgur album)


Other, Overall Personal Reflections:

See this essay I wrote a while back, The Science of Social Change. On real empathy, and how we activists are doing things wrong.

  • I think what resonates with me most is the idea that good people can be trapped in a bad system. The idea that most people have fair – sometimes noble – reasons for doing what they’re doing. That’s real empathy.
  • Project-wise, I’m most interested in interactive art & simulations – and systems thinking can help a lot with that! Instead of just a static causal-loop diagram, maybe there’s a way to make it interactive, to be able to ask “what if” hypothetical questions, and find paths to change the world’s systems?
  • The two instructors are peacemakers / conflict mediators. That kind of task inspires me. Since I’m now in journalism and the US Election 2016 is highlighting how polarized we are, maybe Systems Thinking can help all sides see that their enemies aren’t each other – but rather, they have a common enemy: the system? (Can’t we all just get along?!...)

For a less messy and far better introduction to Systems Thinking, here’s the book that got me started down this path: Thinking In Systems, by Donella Meadows. I also made an interactive explainer of systems thinking… using emoji: Simulating The World (In Emoji 😘)

If you're an educator, journalist, or just a curious person, Systems Thinking can really help you a lot. Hope these messy notes could be a good introduction for you!

22 Mar 14:25

Creating With Contradictions

by Nicky Case

Imagine two students care a lot about the fishing industry. Maybe this is an Alaskan school, who knows.

The first student, Alice, cares mostly about the ecological impact of the industry. She knows ecologies have a fine balance, so she makes a simplified model of the marine ecosystem, like so:

[diagram: the ecological balancing loop]

More plankton means more fish, but more fish means less plankton. This is a basic balancing loop, and so, this ecological system stays in equilibrium.

Meanwhile, the other student, Bob, cares more about the economics side. He knows that markets also tend towards equilibrium, via a self-correcting process of supply & demand. So, he models the fishing industry like this:

[diagram: the economic balancing loop]

Another balancing loop, another equilibrium. If the price goes up, that's a signal for fishers to fish more, which increases the supply of fish on the market, which drives the price back down again.

Alice and Bob then combine their models – by just copy-pasting them together – and get something like this:

[diagram: the combined model]

Now, there's a third loop. If fishing increases, that reduces the amount of fish in the ocean, which reduces the supply of fish on the market, which increases the price of fish, which is a signal to increase fishing even more.

balancing loop + balancing loop = vicious cycle.
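
Here's a deliberately crude Python sketch of that combined model – every coefficient is invented, nothing is calibrated to any real fishery – just to show the shape of the trap. Each balancing loop is stable on its own, but wire in the price signal and fishing effort ratchets up until the stock collapses:

# Alice + Bob's combined model, with made-up numbers.
def simulate(price_feedback, years=50):
    plankton, fish, effort = 1.0, 0.5, 0.2   # arbitrary starting levels
    for _ in range(years):
        catch = effort * fish
        # Alice's balancing loop: plankton regrows, fish eat it.
        plankton = max(plankton + 0.5 * plankton * (1 - plankton)
                       - 0.3 * fish * plankton, 0.0)
        # Fish grow from plankton, die off, and get caught.
        fish = max(fish + 0.4 * fish * plankton - 0.1 * fish - catch, 0.0)
        if price_feedback:
            # Bob's balancing loop, wired in: scarcity raises the price,
            # and a high price is a signal to fish MORE.
            price = max(2.0 - 3.0 * catch, 0.0)
            effort = max(effort + 0.05 * (price - 1.0), 0.0)
    return fish

print("fish stock after 50 years, fixed effort:       ", round(simulate(False), 2))
print("fish stock after 50 years, price-driven effort:", round(simulate(True), 2))
# The first run settles at a healthy equilibrium; the second collapses
# toward zero -- the third, reinforcing loop eats the other two.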


“Economics enthusiasts and ecology enthusiasts share an affliction. Conservatives think that the self-organizing properties of a market economy are a miracle that must not be messed with. Greens think that the self-organizing properties of ecologies are a miracle that must not be messed with.”

~ Stewart Brand, Whole Earth Discipline


This post, of course, isn't just about overfishing. It's about how Systems Thinking can take two different perspectives – even seemingly contradicting ones – and combine them into a better, more holistic understanding of the world!

To take another example, Parable of the Polygons combined two seemingly-contradicting ideas: 1) that very few people nowadays are explicitly sexist/racist, yet 2) statistics on employment & incarceration show clear biases in gender & race. But by using emergent behavior and complex systems, Vi & I could show that these two contradictory ideas can exist simultaneously. Could model-driven discourse be a good way to foster deliberative democracy???
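
For the curious: the system underneath Parable of the Polygons is Thomas Schelling's classic segregation model. Here's a bare-bones Python sketch – the grid size, mix, and threshold are arbitrary choices of mine – showing that agents with only a mild preference (wanting at least a third of their neighbors to be like them) still sort themselves into segregated blocks:

import random

SIZE = 20
THRESHOLD = 1 / 3   # agents are unhappy if < 1/3 of their neighbors match them
# 0 = empty lot; 1 and 2 = the two groups (~40% each, ~20% empty)
grid = [[random.choice([0, 1, 1, 2, 2]) for _ in range(SIZE)] for _ in range(SIZE)]

def sameness(y, x):
    # Fraction of this agent's non-empty neighbors that share its group.
    me, same, total = grid[y][x], 0, 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            n = grid[(y + dy) % SIZE][(x + dx) % SIZE]
            if (dy, dx) != (0, 0) and n != 0:
                total += 1
                same += (n == me)
    return same / total if total else 1.0

def average_sameness():
    scores = [sameness(y, x) for y in range(SIZE) for x in range(SIZE) if grid[y][x]]
    return sum(scores) / len(scores)

print("avg. same-group neighbors before:", round(average_sameness(), 2))  # ~0.5

for _ in range(100):  # each round, every unhappy agent moves to a random empty lot
    for y in range(SIZE):
        for x in range(SIZE):
            if grid[y][x] and sameness(y, x) < THRESHOLD:
                empties = [(i, j) for i in range(SIZE) for j in range(SIZE)
                           if grid[i][j] == 0]
                i, j = random.choice(empties)
                grid[i][j], grid[y][x] = grid[y][x], 0

print("avg. same-group neighbors after: ", round(average_sameness(), 2))  # typically ~0.7+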

Contradictions can be combined. That's a powerful idea we've forgotten, in this age of polarized politics.

I think this is because we visualize truth as a scale – à la Lady Justice – as if arguments can only be "for" or "against" a certain proposition, as if contradicting ideas can only be competitive and not constructive, as if knowledge was zero-sum. Maybe thinking in systems can help us come together, not just tolerating other perspectives, but actively inviting them. Maybe.

As a cliché phrase goes, “let's put aside our differences and come together.” Forget that. Let's bring along our differences, and come together anyway.

21 Mar 10:00

The statistics of Promissory Science. Part I: Making non-sense with statistical methods

by Ken Weiss
Statistics is a form of mathematics, a way devised by humans for representing abstract relationships. Mathematics comprises axiomatic systems, which make assumptions about basic units such as numbers; basic relationships like adding and subtracting; and rules of inference (deductive logic); and then elaborates these to draw conclusions that are typically too intricate to reason out in other less formal ways.  Mathematics is an awesomely powerful way of doing this abstract mental reasoning, but when applied to the real world it is only as true or precise as the correspondence between its assumptions and real-world entities or relationships. When that correspondence is high, mathematics is very precise indeed, a strong testament to the true orderliness of Nature.  But when the correspondence is not good, mathematical applications verge on fiction, and this occurs in many important applied areas of probability and statistics.

You can't drive without a license, but anyone with R or SAS can be a push-button scientist.  Anybody with a keyboard and some survey generating software can monkey around with asking people a bunch of questions and then 'analyze' the results. You can construct a complex, long, intricate, jargon-dense, expansive survey. You then choose whom to subject to the survey--your 'sample'.  You can grace the results with the term 'data', implying true representation of the world, and be off and running.  Sample and survey designers may be intelligent, skilled, well-trained in survey design, and of wholly noble intent.  There's only one little problem: if the empirical fit is poor, much of what you do will be non-sense (and some of it nonsense).

Population sciences, including biomedical, evolutionary, social and political fields, are experiencing an increasingly widely recognized crisis of credibility.  The fault is not in the statistical methods on which these fields heavily depend, but in the degree of fit (or not) to the assumptions--with the emphasis these days on the 'or not', and an all-too-frequent dismissal of the underlying issues in favor of a patina of technical, formalized results.  Every capable statistician knows this, but of course might be out of business if they paid it open enough attention. And many statisticians may be rather uninterested in, or too foggy about, the philosophy of science to understand what goes beyond the methodological technicalities.  Jobs and journals depend on not being too self-critical.  And therein lie rather serious problems.

Promissory science
There is the problem of the problems--the problems we want to solve, such as understanding the cause of disease so that we can do something about it.  When causal factors fit the assumptions, statistical or survey study methods work very well.  But when causation is far from fitting the assumptions, the impulse of the professional community seems mainly to be to increase the size, scale, cost, and duration of studies, rather than to slow down and rethink the question itself.  There may be plenty of careful attention paid to refining statistical design, but basically this stays safely within the boundaries of current methods and beliefs, and the need for research continuity.  It may be very understandable, because one can't just quickly uproot everything or order up deep new insights.  But it may be viewed as abuse of public trust as well as of the science itself.

The BBC Radio 4 program called More Or Less keeps a watchful eye on sociopolitical and scientific statistical claims, revealing what is really known (or not) about them.  Here is a recent installment on the efficacy (or believability, or neither) of dietary surveys.  And here is a FiveThirtyEight link to what was the basis of the podcast.

The promotion of statistical survey studies to assert fundamental discovery has been referred to as 'promissory science'.  We are barraged daily with promises that if we just invest in this or that Big Data study, we will put an end to all human ills.  It's a strategy, a tactic, and at least the top investigators are very well aware of it.  Big long-term studies are a way to secure reliable funding and to defer delivering on promises into the vague future.  The funding agencies, wanting to seem prudent and responsible to taxpayers with their resources, demand some 'societal impact' section on grant applications.  But there is in fact little if any accountability in this regard, so one can say they are essentially bureaucratic window-dressing exercises.

Promissory science is an old game, practiced since time immemorial by preachers.  It boils down to promising future bliss if you'll just pay up now.  We needn't be (totally) cynical about this.  When we set up a system that depends on public decisions about resources, we will get what we've got.  But having said that, let's take a look at what is a growing recognition of the problem, and some suggestions as to how to fix it--and whether even these are really the Emperor of promissory science dressed in less gaudy clothing.

A growing, if partial, awareness
The problem of results that are announced by the media, journals, and universities but that don't deliver the advertised promises is complex but widespread.  In part because research has become so costly, warning sirens are now sounding as it becomes clear that the promised goods are not being delivered.

One widely known issue is the lack of reporting of negative results, or their burial in minor journals. Drug-testing research is notorious for this under-reporting.  It's too bad, because a negative result on a well-designed test is legitimately valuable and informative.  A concern, besides corporate secretiveness, is that if the cost is high, taxpayers or share-holders may tire of funding yet more negative studies.  Among other efforts, including by NIH, there is a formal attempt called AllTrials to rectify the under-reporting of drug trials, and this does at least seem to be thriving and growing, even if incomplete and unenforceable.  But this non-reporting problem has been written about so much that we won't deal with it here.

Instead, there is a different sort of problem.  The American Statistical Association has recently noted an important issue, which is the use and (often) misuse of p-values to support claims of identified  causation (we've written several posts in the past about these issues; search on 'p-value' if you're interested, and the post by Jim Wood is especially pertinent).  FiveThirtyEight has a good discussion of the p-value statement.

The usual interpretation is that p represents the probability that, if there is in fact no causation by the test variable, its apparent effect arose just by chance.  So if the observed p in a study is less than some arbitrary cutoff, such as 0.05, it means essentially that if no causation were involved, the chance you'd see an association at least this strong anyway is no greater than 5%; that is, there is some evidence for a causal connection.
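That definition is concrete enough to check by simulation.  Here is a minimal sketch (my own construction for this post, not anything from a statistics body; the 'observed' difference is invented):

    # What a p-value measures: if the null is true (no effect at all), how
    # often would a group difference at least this large appear by chance?
    import random

    def null_study(n=50):
        """Two groups drawn from the SAME distribution, so any difference is chance."""
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        return abs(sum(a) / n - sum(b) / n)

    null_diffs = [null_study() for _ in range(10_000)]

    observed = 0.40  # pretend a real study reported this group difference
    p = sum(d >= observed for d in null_diffs) / len(null_diffs)
    print(f"p is about {p:.3f}")  # chance of so large a difference under the null

Nothing in the simulation says the effect is real or important; a small p only says the observed pattern would be unusual if nothing were going on.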

Trashing p-values is becoming a new cottage industry!  Now JAMA is on the bandwagon, with an article showing, in a survey of biomedical literature from the past 25 years that included well over a million papers, that a far disproportionate and increasing number of studies reported statistically significant results.  Here is the study on the JAMA web page, though it is not publicly accessible yet.

Besides the apparent reporting bias, the JAMA study found that those papers generally failed to flesh out that result adequately.  Where are all the negative studies that statistical principles would lead us to expect?  We don't see them, especially in the 'major' journals, as has been noted many times in recent years.  Just as importantly, authors often did not report confidence intervals or other measures of the degree of 'convincingness' that might illuminate the p-value. In a sense, that means authors didn't say what range of effects is consistent with the data.  They reported a non-random effect, but often didn't give the effect size, that is, say how large the effect was, even assuming the effect was unusual enough to support a causal explanation. So, for example, a statistically significant increase of risk from 1% to 1.01% is trivial, even if one could accept all the assumptions of the sampling and analysis.

Another vocal critic of what's afoot is John Ioannidis; in a recent article he levels both barrels against the misuse and mis- or over-representation of statistical results in biomedical sciences, including meta-analysis (the pooling of many diverse small studies into a single large analysis to gain sufficient statistical power to detect effects and test for their consistency).  This paper is a rant, but a well-deserved one, about how 'evidence-based' medicine has been 'hijacked', as he puts it.  The same must be said of 'precision genomic' or 'personalized' medicine, or 'Big Data', and other sorts of imitative sloganeering going on from many quarters who obviously see this sort of promissory science as what you have to do to get major funding.  We have set ourselves a professional trap, and it's hard to escape.  For example, the same author has been leading the charge against misrepresentative statistics for many years, and he and others have shown that the 'major' journals have in a sense the least reliable results in terms of their replicability.  But he's been raising these points in the same journals that he shows are culpable of the problem, rather than boycotting those journals.  We're in a trap!

These critiques of current statistical practice are the points getting most of the ink and e-ink.  There may be a lot of cover-ups of known issues, and even hypocrisy, in all of this, and perhaps more open or understandable tacit avoidance.  The industry (e.g., drug, statistics, and research equipment) has a vested interest in keeping the motor running.  Authors need to keep their careers on track.  And, in the fairest and non-political sense, the problems are severe.

But while these issues are real and must be openly addressed, I think the problems are much deeper. In a nutshell, I think they relate to the nature of mathematics relative to the real world, and the nature and importance of theory in science.  We'll discuss this tomorrow.
01 Nov 20:55

Is Frontiers a potential predatory publisher?

by Leonid Schneider
Kayle Sawyer

When Beall added Frontiers to his blacklist, my first thought was, "Well, there goes respect for Beall's List." Upon reading this article, I'm not so sure. The value that traditional journals have over blogs keeps diminishing with every year. In terms of sharing science, it's almost gone. In terms of CV padding, it's still holding strong. But are journals like Frontiers changing that?

The Lausanne-based publishing house Frontiers, founded by the neuroscientists Henry and Kamila Markram, has recently been added to the Beall’s List of potential, possible, or probable predatory scholarly open-access publishers. Was this decision justified? I wish to share here some of my recent investigations.

Previously, I reported on an editorial conflict at the Frontiers medical section in Laborjournal and Lab Times. In May 2015 Frontiers sacked almost all of its medical chief editors. This was because those chief editors had signed a “Manifesto of Editorial Independence”, which went against one of the key guidelines of Frontiers, namely that editors must always “allow the authors an opportunity for a rebuttal”. Associate editors are instructed to always “consider the rebuttal of the authors”, even “if the independent reviews are unfavourable”.  At the same time, chief editors claimed to have had little, if any, influence over the editorial processes at Frontiers. Since the Frontiers Executive Editor Frederick Fenter fired all 31 signatory chief editors, Frontiers in Medicine has operated without an Editor-in-Chief and with few Chief Specialty Editors. Medical ethics requirements for publication, originally introduced by the previous chief editors, were not implemented in the Frontiers instructions for authors. There appear to be few people in a position to provide oversight, while the associate editors handle manuscripts which they often receive directly from authors. Some of these associate editors are no strangers to controversy themselves; Alfredo Fusco, who is also a frequent author at Frontiers in Medicine, has had several of his papers retracted and is facing a criminal investigation over alleged data manipulations.

The Frontiers in Medicine “purge” led me to inquire into how Frontiers’ unique editorial model works in their other journals. What I learned is that even the associate editors often find their power limited: once a manuscript has been sent out for peer review, Frontiers editors have hardly any option to reject it. This may explain how controversial papers came to be published in Frontiers, e.g. one denying that HIV is the cause of AIDS, or another suggesting that vaccinations cause autism.

On the other hand, Frontiers is quite popular with many scientists and research organisations. How can a publisher which helped pioneer such innovations as open access and name-signed peer review, have come to this?

Frontiers’ story began in 2007, with the first journal Frontiers in Neuroscience. One of its very first accepted articles, before the new journal was officially accepting submissions, was a theory on the origins of autism by journal founders Kamila and Henry Markram. Since then, their Intense World Theory (formerly Intense World Syndrome) has been published in various Frontiers neuroscience-related journals.  There, the Markrams’ COI statement always proclaims “the absence of any commercial or financial relationships that could be construed as a potential conflict of interest”. Yet these two authors have an apparent ownership interest in the journals in which they publish. Henry Markram is listed as Co-Founder & Editor-in-Chief at Frontiers, Kamila Markram as Co-Founder & CEO. The mass sacking of medical chief editors suggests that they are in a position to decide on the employment and remuneration of their editors.

Meanwhile, Frontiers manages, according to its own website, “54 open access journals, 55,000 editors, 38,000 articles”. Some Frontiers editors I communicated with were quite content with the publisher. Anne Simon, professor at the University of Maryland in the USA (and one of the whistle-blowers in the case of Olivier Voinnet, which I have been covering for Lab Times), is also Editor-in-Chief (EiC) of the journal Frontiers in Virology. She describes her experience as “extremely positive”. Unlike the medical chief editors, Simon says she was never left in the dark about submitted manuscripts, nor did she witness their inappropriate handling by associate editors or reviewers.

Simon explained to me in an email that she sees the Editor-in-Chief as

“the next point of contact for editors who are having problems handling a manuscript or needing advice, and authors, who may be upset with decisions and want to contact someone other than the editor who handled the manuscript”.

She added:

“we are frequently called upon to politely nudge late reviewers, when the editor and journal managers have been unsuccessful, or if there is an editor who is slow in the review process”. Maybe this is why Frontiers in Virology is one of the best-cited Frontiers journals: because the chief editors are free to do their jobs? Simon clarifies: “Most journals can operate smoothly without EiC most of the time. But when something comes up, (…) then the EiC is a critical part of the journal for making decisions about exceptions to journal “rules” and dealing with papers that have possible ethical issues”.

Apparently Frontiers in Medicine can operate without an Editor-in-Chief, and indeed it has done so for months now. But what about the ethical duties Simon mentioned?

Matthias Barton, cardiology professor at the University of Zurich and former EiC of Frontiers in Medicine and Frontiers in Cardiovascular Medicine, told me that when he and his fellow editors were sacked, their ethical policies were also shown the door. New medical ethics guidelines, which he and his colleagues had established to preserve clinical safety and patient protection, were revoked. For example, Barton and colleagues stipulated that “For each manuscript submitted, every author needs to electronically complete and sign the COI form provided by ICMJE [International Committee of Medical Journal Editors], and all completed COI forms need to be submitted with the manuscript”.

Today, however, there is no requirement or even option for every author to provide a signed COI statement at Frontiers in Medicine, despite ICMJE guidelines. Instead, the corresponding author simply has to make one click to verify COI status on behalf of others.

Another example of the post-purge reform: Frontiers no longer distinguishes in its “Case Reports” section between human and animal subjects. The guidelines for manuscript submission are the same for both. No mention is made that human patient identity must be specifically respected and protected; in fact, the new Frontiers guidelines there are the same as for horses and cattle. The previous definition of the “Case Report”, as written by the now absent editors, was focused on human patients only and included demands such as: “Manuscripts must not include any information that allows identification of the patient. This includes, but is not limited to, names, initials, and hospital information” as well as “as anonymity cannot be guaranteed by simply covering the eye area with a black bar, the patient, parent, or guardian must be shown the photograph intended for publication, provide informed consent for its publication, and be informed by the authors that the image will be visible on the internet”. For Frontiers in Medicine, these rules are now a thing of the past.

Simon also stated:

“Having a scientist as EiC who is in the same [research-] field as the journal is important for making informed decisions”.

However, this does not appear to be the case for the new head of Frontiers in Cardiovascular Medicine. After the editorial purge, the journal received its new EiC, Hendrik Tevaearai Stahel, professor of Cardiovascular Surgery at the Inselspital in Bern, Switzerland. Cardiovascular medicine is a branch of internal medicine and requires an utterly different medical specialization than cardiac surgery; a heart surgeon cannot replace a cardiovascular internist. Dr. Tevaearai Stahel’s CV is rather less conclusive in the area of cardiovascular medicine than one would anticipate for the EiC of this journal.

Frontiers’ philosophy is to give all authors a chance to publish their work in one of their journals. In basic science, this is, to a degree, a laudable approach indeed. Many scientists convincingly argue that every single research study should be published and judged by the scrutiny of scientist colleagues in post-publication peer review. Yet this option is not available at Frontiers, and while the reviewers are named, their peer review reports are kept confidential. This policy of publishing almost every manuscript, while keeping the peer review process rather opaque, has possibly contributed to the recent placing of the publisher Frontiers on Beall’s List.

With medical studies, which go beyond laboratory experiments, the issue of proper editorial process is even more serious. Doctors adjust their patient treatments according to recent developments and publications in their field. This is why there are strict ethical rules and quality guidelines for clinically relevant medical publications, as issued by the ICMJE and the World Association of Medical Editors (WAME). Therefore, there can be many reasons for a submitted manuscript to be rejected. However, at Frontiers, the rejection option is not always available. Generally, a peer reviewer can only withdraw from the peer review; recommending rejection is not an available option. If a reviewer does withdraw, the handling editor is automatically prompted to find a replacement reviewer. Theoretically, this can go on back and forth until two positive peer reviews are finally obtained. Occasionally, associate editors skip the search for willing reviewers altogether and perform the peer review themselves.

Tamas Szakmany, honorary senior lecturer in intensive care medicine at Cardiff University in the UK, reports on his experience as a reviewer for Frontiers in Medicine:

“The piece in question was lacking very basic aspects of a scientific manuscript and the authors failed to make any amends. I made it very clear at the first response to the authors that the paper was unacceptable in this format and although they made some small changes, they did not address any of my major comments. The subsequent rounds of “revisions” were getting nowhere and as there was no option for me to reject the manuscript in the online review system and the Editor couldn’t make this decision as he was forced to give further “chances” for improvement, I felt that I had no other option than to withdraw from the process as the authors were clearly not willing to understand”.

Szakmany summarizes:

“From a reviewer point, there is no opportunity to reject a paper, only to endorse or ask for further revisions”.

The specialty chief editor responsible for the above-mentioned Szakmany-reviewed manuscript was Zsolt Molnár, professor for intensive care medicine at the University of Szeged in Hungary. Molnár was among the signatories of the editorial Manifesto, which resulted in his removal together with 30 other chief editors. While still in his post, Molnár protested about the unrejectable manuscript to the Frontiers in Medicine “Editorial Office” – actually a publisher-run department outside of any academic editor control. He received a reply from the journal manager who explained:

“once a paper is sent for peer-review, we want to give the authors the chance to discuss with the reviewers in the interactive review stage. You can always reject a manuscript BEFORE [caps in the original] sending it to reviewers/review editors”.

Yet just in the previous sentence, the journal manager also explained:

“Regarding rejecting before interactive review: the reason we strongly discourage this is because Frontiers wishes to overcome one of the common concerns that authors have – that the editors have overruled their chance to discuss their paper with the reviewers”.

This sounds somewhat like a Catch-22 situation, in which the very act of sending out a paper for peer review precludes the ability to reject this paper on the basis of the review, should it turn out negative.

The resulting high acceptance rate at Frontiers goes hand-in-hand with the fact that the publisher has offered its chief editors a reward of €5,000 “for each batch of 120 papers submitted to your section in 2015”.

Yet under certain conditions, Frontiers has no problems with rejections at all, even of positively reviewed manuscripts. Lydia Maniatis, formerly an adjunct psychologist at the American University in Washington DC, had such an experience. She submitted a rather critical Commentary (a publication type generally published by Frontiers free of charge) on a certain Frontiers in Human Neuroscience article which dealt with visual shape perception. Her manuscript was assigned to an associate editor, but soon rejected. The reason: despite one endorsing review, another reviewer chose to wordlessly withdraw. No specific criticisms from this reluctant reviewer were forwarded to Maniatis. No replacement reviewer was appointed, despite Maniatis’ many requests. Instead, the associate editor reviewed the manuscript himself, despite being a child psychologist and autism specialist rather outside the field. He decreed that Maniatis’ revised manuscript was “not adequate and lacked clarity and focus”, without providing any further explanation. With the support of the journal’s Chief Editor, the rejection was final. Maniatis later published her criticisms on PubPeer and PubMed Commons and was finally able to engage with the authors of the paper.

After Frontiers was listed as a potential predatory publisher, Nature News reported on the scientists’ protests about this addition to the “controversial ‘Beall’s List’”. The Nature Publishing Group (NPG) is owned by the German publishing house Holtzbrinck Publishing Group, which is also the partial owner of Frontiers. Indeed, as I reported for Laborjournal and Lab Times, NPG became a major stakeholder in Frontiers, a move publicly much celebrated by both publishers. Then, at the beginning of 2015, a break came. NPG representatives left the Frontiers board, with Henry Markram taking over their duties. The current administrative board lists the Markrams, some Frontiers employees, the reviewing board member PricewaterhouseCoopers (USA), a representative from the private equity firm CVC Capital Partners (Luxembourg), and Michael Brockhaus, Head of Group Strategy at the Holtzbrinck Publishing Group. I reached out to Brockhaus, through his personal assistant, for a comment on the nature of Holtzbrinck’s financial involvement with Frontiers, but received no reply.

One could assume that NPG has sold or withdrawn its investment in Frontiers; however, one fact suggests that there has not been a total financial divorce: Nature journals keep advertising the Frontiers-owned academic social network, Loop, by posting links to authors’ Loop profiles (which are created automatically for all Frontiers authors) on their article websites. Certain editors told me that they did not succeed in having their Loop accounts fully deleted.

Loop may help Frontiers and NPG scientists to connect, but not every account belongs to a bona fide user. The network contains a number of obviously inappropriate or bogus accounts, and Frontiers has been informed by then-EiC Barton about certain questionable Loop profiles. Some, such as the profile “Isha FB1 TEST Jan” (whose only content was a photo of a pornographic film actress) were removed, but others remain active: an Indian “Genius Mind”, a US professor by the name of “Alpha Shred”, a teenage professor from Macedonia, a Chinese senior researcher “Eagle Eagle Jg”, a student of a geographically bizarre “Amedeo Avogadro University of Eastern Piedmont” in Lebanon, and finally a US based CEO called “mis souri” whose speciality is the  “wonderful sport of duck hunting”. Frontiers thanked Barton in January 2015 for sharing the information on these strange researcher profiles, but has yet to remove them.

Regardless of their publishing and editorial policies, Frontiers journals have recently joined the Committee on Publication Ethics (COPE) en masse. Coincidentally or not, prior to this the Frontiers journal manager Mirjam Curno joined the committee as a council member. While most other journals list their editors-in-chief as their COPE contact, none of the listed Frontiers journals does. Instead, their COPE contacts are exclusively employees of the publisher, working in managerial capacities – and not involved in the editorial process of the journals. Some of these employees have little experience in the research fields they are now supervising. One is a former earth scientist, now in charge of veterinary science, neurology and psychiatry. Another studied English and Croatian at university but is now an oncology, endocrinology and public health specialist. Yet another, who supervises several Frontiers life science journals despite having studied earth sciences, has no PhD. In fact, a number of Frontiers journal managers carry no academic credentials beyond a bachelor’s degree, in a field unrelated to their Frontiers duties. All of this would not necessarily be a problem if these managers were assisting and answering to the senior academic editors of their respective journals. Instead, as the sacked medical chief editors have experienced, these journal managers interfered with the editorial process, occasionally advising these editors to keep recruiting further reviewers or dissuading them from rejecting a manuscript.

Editorial independence, free from the meddling of the owner and publisher, is a key principle of good editorial practice in science publishing, as stipulated by highly respected organizations such as the ICMJE and COPE. Shortly after sacking its editors, Frontiers listed its medical and other journals as “Following the ICMJE Recommendations” and, as mentioned, became a member of COPE. These events, however, do not mean that Frontiers is bound to change its internal policies. Why? Simply because both organizations seem not to mind when those who publicly subscribe to their rules don’t actually adhere to them.

More on this soon.

The author wishes to thank NS, RP, PSB, SC and JB for their critical comments on this text.

28.10.2015: The institutional affiliation of Lydia Maniatis has been corrected -LS

06.11.2015: Two journals, which list their EiCs as COPE contacts, were erroneously attributed to Frontiers Media. The reader “MH” has pointed out the mistake in a comment below. This text correction means that not a single Frontiers journal lists its chief editor as COPE contact. -LS


28 Sep 15:26

Neurotic Neurons: Simplifications

by Nicky Case

My most recent interactive project, Neurotic Neurons, has a lot of simplifications. I think this can be a good thing — a street map is useful not just despite simplifying the city, but because it simplifies the city. Likewise, this model throws away details that may distract from learning about the core principles of anxiety and therapy.

Nevertheregardless, for the sake of intellectual honesty, here's everything I know I lied about, why I simplified things the way I did, and what I know I don't even know.

1. Thoughts don’t live in individual neurons.

Well, duh.

You have no "dog" neuron or "pain" neuron. I was using those phrases as linguistic shorthand because explaining the world through symbolic spoken language is like eating a steak through a straw.

I wish I didn't have to explicitly debunk the idea that thoughts live inside individual neurons (to be fair, nobody knows how thoughts emerge from neural connections, they just do), but considering crappy popsci like this, and the fact that so many people still believe that “we only use 10% of our brain”, I just gotta cover all my bases here.

2. Neurons aren't deterministic.

One neuron doesn't directly cause the next connected neuron to fire — instead, it raises (or lowers) the next neuron's membrane potential, merely making it more likely (or less likely) to fire. It's stochastic, that is, sorta random.

However, adding unpredictability to a model makes it harder to learn from, and if I could get away without it, I would. My goal with Neurons was teachin' peeps the general gist, rather than specific details.

But making my model deterministic meant I had to add another inaccuracy:

3. Neural signals don't have varying strengths.

All neural electrical signals have more-or-less the exact same voltage. What real neurons do to vary the intensity of a nerve signal is change the frequency of signals. Getting poked sends a few signals per second, getting punched sends a lot more signals per second.

Yup, higher signal frequency sure hertz.

(bad pun face)

Anyway, since I made my model deterministic, I couldn't use probability to make signals "die out". So, I pretended that signals get weaker the more they're passed down from neuron to neuron, and that only the strongest signals trigger the Hebbian & Anti-Hebbian learning rules.
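In code, that simplification looks something like this (a toy reconstruction of mine, not the actual Neurotic Neurons source; the neuron names, weights, and thresholds are all invented):

    # Deterministic propagation: signals weaken at each hop instead of dying
    # out at random, and only strong-enough signals trigger learning.
    weights = {("poke", "fear"): 0.9, ("fear", "flinch"): 0.9}

    def fire(start, strength=1.0, decay=0.7, learn_threshold=0.5):
        queue = [(start, strength)]
        while queue:
            neuron, s = queue.pop()
            print(f"{neuron} fires at strength {s:.2f}")
            for (src, dst), w in list(weights.items()):
                if src != neuron:
                    continue
                passed = s * w * decay  # the signal weakens at every hop
                if passed >= learn_threshold:
                    # Hebbian rule: a strong enough signal strengthens the link.
                    # (The Anti-Hebbian weakening rule is omitted in this sketch.)
                    weights[(src, dst)] = min(w + 0.1, 1.0)
                if passed > 0.05:  # signals that get too weak simply die out
                    queue.append((dst, passed))

    fire("poke")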

Speaking of which...

4. Hebbian & Anti-Hebbian Learning is... actually pretty good, but sort of incomplete.

Hebbian & Anti-Hebbian Learning, even though proposed long before we knew much about the anatomy of neurons, turned out to have a solid biological basis! It's called spike-timing-dependent plasticity (STDP), one of many things I don't actually know deeply about.

Anyway, the Hebb & Anti-Hebb rules also explain a lot of the weird results from classical conditioning experiments, even explaining why backwards conditioning doesn't work, and the extinction of conditioned responses.

What those rules don't explain, however, is the Zero Contingency Procedure. (Given a connection A→B, firing B without A weakens the connection, since A no longer "predicts" B.) I don't know what model explains that, which brings me to my final bulletpoint:

5. Everything I Know I Don't Know

As I just mentioned, I don't know what model explains the Zero Contingency Procedure. Maybe some variant on the Anti-Hebbian rule?

Also, other than the most basic details I gleaned from Crash Course, I don't know much about neural anatomy, especially not the actual chemical mechanism by which Hebb/Anti-Hebb/STDP works.

Furthermoreover, I don't know where or how inhibitory neural connections come into play. Because... that's a connection from neuron A to neuron B, where A anti-predicts B? If A fires, make B firing less likely. Not sure how that kind of connection gets learnt.

And finally, while it would be nice... for obvious reasons, I can't list what I don't know I don't know.

28 Feb 00:19

Psych journal bans significance tests; stat blogger inundated with emails

by Andrew

OK, it’s been a busy email day.

From Brandon Nakawaki:

I know your blog is perpetually backlogged by a few months, but I thought I’d forward this to you in case it hadn’t hit your inbox yet. A journal called Basic and Applied Social Psychology is banning null hypothesis significance testing in favor of descriptive statistics. They also express some skepticism of Bayesian approaches, but are not taking any action for or against it at this time (though the editor appears opposed to the use of noninformative priors).

From Joseph Bulbulia:

I wonder what you think about the BASP’s decision to ban “all vestiges of NHSTP (P-values, t-values, F-values, statements about “significant” differences or lack thereof and so on)”?

As a corrective to the current state of affairs in psychology, I’m all for bold moves. And the emphasis on descriptive statistics seems reasonable enough — even if more emphasis could have been placed on visualising the data, more warnings could have been issued around the perils of un-modelled data, and more value could have been placed on obtaining quality data (as well as quantity).

My major concern, though, centres on the author’s timidness about Bayesian data analysis. Sure, not every Bayesian analysis deserves to count as a contribution, but nor is it the case that Bayesian methods should be displaced while descriptive methods are given centre stage. We learn by subjecting our beliefs to evidence. Bayesian modelling merely systematises this basic principle, so that adjustments to belief/doubt are explicit.

From Alex Volfovsky:

I just saw this editorial from Basic and Applied Social Psychology: http://www.tandfonline.com/doi/pdf/10.1080/01973533.2015.1012991

Seems to be a somewhat harsh take on the question, though it gets at the frequently arbitrary choice of “p < 0.05”.

From Jeremy Fox:

Psychology journal bans inferential statistics: As best I can tell, they seem to have decided that all statistical inferences from sample to population are inappropriate.

From Michael Grosskopf:

I thought you might find this interesting if you hadn’t seen it yet. I imagine it is mostly the case of a small journal trying to make a name for itself (I know nothing of the journal offhand), but still is interesting.

http://www.tandfonline.com/doi/pdf/10.1080/01973533.2015.1012991

From the Reddit comments on a thread that led me to the article:
“They don’t want frequentist approaches because you don’t get a posterior, and they don’t want Bayesian approaches because you don’t actually know the prior.”

http://www.reddit.com/r/statistics/comments/2wy414/social_psychology_journal_bans_null_hypothesis/

From John Transue:

Null Hypothesis Testing BANNED from Psychology Journal: This will be interesting.

From Dominik Papies:

I assume that you are aware of this news, but just in case you haven’t heard, one journal from psychology issued a ban on NHST (see editorial, attached). While I think that this is a bold move that may shake things up nicely, I feel that they may be overshooting, as it is not the technique per se but rather its use that seems the real problem to me. The editors also state they will put more emphasis on sample size and effect size, which sounds like good news.

From Zach Weller:

One of my fellow graduate students pointed me to this article (posted below) in the Basic and Applied Social Psychology (BASP) journal. The article announces that hypothesis testing is now banned from BASP because the procedure is “invalid”. Unfortunately, this has caused my colleague’s students to lose motivation for learning statistics. . . .

From Amy Cohen:

From the Basic and Applied Social Psychology editorial this month:

The Basic and Applied Social Psychology (BASP) 2014 Editorial emphasized that the null hypothesis significance testing procedure (NHSTP) is invalid, and thus authors would not be required to perform it (Trafimow, 2014). However, to allow authors a grace period, the Editorial stopped short of actually banning the NHSTP. The purpose of the present Editorial is to announce that the grace period is over. From now on, BASP is banning the NHSTP. With the banning of the NHSTP from BASP, what are the implications for authors?

From Daljit Dhadwal:

You may already have seen this, but I thought you could blog about this: the journal “Basic and Applied Social Psychology” is banning most types of inferential statistics (p-values, confidence intervals, etc.).

Here’s the link to the editorial:

http://www.tandfonline.com/doi/full/10.1080/01973533.2015.1012991

John Kruschke blogged about it as well:

http://doingbayesiandataanalysis.blogspot.ca/2015/02/journal-bans-null-hypothesis.html

The comments on Kruschke’s blog are interesting too.

OK, ok, I’ll take a look. The editorial article in question is by David Trafimow and Michael Marks. Kruschke points out this quote from the piece:

The usual problem with Bayesian procedures is that they depend on some sort of Laplacian assumption to generate numbers where none exist. The Laplacian assumption is that when in a state of ignorance, the researcher should assign an equal probability to each possibility.

Huh? This seems a bit odd to me, given that I just about always work on continuous problems, so that the “possibilities” can’t be counted and it is meaningless to talk about assigning probabilities to each of them. And the bit about “generating numbers where none exist” seems to reflect a misunderstanding of the distinction between a distribution (which reflects uncertainty) and data (which are specific). You don’t want to deterministically impute numbers where the data don’t exist, but it’s ok to assign a distribution to reflect your uncertainty about such numbers. It’s what we always do when we do forecasting; the only thing special about Bayesian analysis is that it applies the principles of forecasting to all unknowns in a problem.
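That distinction between imputing a number and assigning a distribution is easy to see in a forecasting toy (my example, not Gelman's; all the numbers are invented):

    # Forecasting next year's value when the growth rate is unknown.
    import random

    # Imputing a number: treat the unknown as if it were known exactly.
    point_forecast = 100 * (1 + 0.03)

    # Assigning a distribution: the forecast inherits the uncertainty.
    draws = sorted(100 * (1 + random.gauss(0.03, 0.02)) for _ in range(10_000))
    lo, hi = draws[250], draws[9750]  # central 95% interval

    print(f"point forecast: {point_forecast:.1f}")
    print(f"with uncertainty: 95% interval ({lo:.1f}, {hi:.1f})")

The second version doesn't pretend to know more than it does; it just carries the doubt forward into the forecast.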

I was amused to see that, when they were looking for an example where Bayesian inference is OK, they used a book by R. A. Fisher!

Trafimow and Marks conclude:

Some might view the NHSTP [null hypothesis significance testing procedure] ban as indicating that it will be easier to publish in BASP [Basic and Applied Social Psychology], or that less rigorous manuscripts will be acceptable. This is not so. On the contrary, we believe that the p < .05 bar is too easy to pass and sometimes serves as an excuse for lower quality research.

I’m with them on that. Actually, I think standard errors, p-values, and confidence intervals can be very helpful in research when considered as convenient parts of a data analysis (see chapter 2 of ARM for some examples). Standard errors etc. are helpful in giving a lower bound on uncertainty. The problem comes when they’re considered as the culmination of the analysis, as if “p less than .05” represents some kind of proof of something. I do like the idea of requiring that research claims stand on their own without the (often spurious) support of p-values.

The post Psych journal bans significance tests; stat blogger inundated with emails appeared first on Statistical Modeling, Causal Inference, and Social Science.

09 Feb 19:36

Discussion with Steven Pinker connecting cognitive psychology research to the difficulties of writing

by Andrew

Following up on my discussion of Steven Pinker’s writing advice, Pinker and I had an email exchange that cleared up some issues and raised some new ones.

In particular, Pinker made a connection between the difficulty of writing and some research findings in cognitive psychology. I think this connection is really cool—I’ve been thinking and writing about writing for a while now, but I’ve never really seen the connection to psychology research. So I wanted to share this with you.

Pinker’s remarks came at the end of an email exchange. I’ll share the earlier messages to give the background, but by far the most interesting part is what Pinker said, so I’ll give that right away.

Here’s Pinker, discussing the difficulty of communicating complex ideas (in particular, in academic writing we typically aren’t just making the case for position B, we’re also arguing why previous position A, reasonable as it might sound, is not correct):

I address it in part in chapter 5 of The Sense of Style in discussing our comprehension of negation. The human mind cannot represent a proposition without a truth value – to think “X” is to think “X is true,” at least temporarily. Negation requires an extra mental step—which can easily fail when the person is overloaded or distracted. A number of systematic kinds of error and difficulty follow. My colleague Dan Gilbert has an insightful review of this literature in his 1991 article, “How Mental Systems Believe.”

On top of that people’s comprehension is often driven more by expectations than the literal content of the text (again, particularly when not paying close attention). When I see certain misinterpretations I often think of the puzzling neuropsychological syndrome called “deep dyslexia.” Surface dyslexia consists of misreadings based on alphabetic confusions – misreading “pear” as “bear,” for example. In deep dyslexia, the patient might misread “pear” as “apple.” The puzzle is that if the patient’s word-recognition system could parse the letters well enough to realize it referred to a fruit, it must have been because he matched the input with a stored template for “pear” – so why didn’t he successfully read it as “pear”? Presumably such patients’ semantic representations (the definitions in their mental dictionary) were so degraded that the word merely activated a coarse semantic ballpark, without enough precision to pinpoint the exact entry. From there sheer base-rate frequency determines the output. It’s a crude analogy to the way we non-brain-damaged people often parse a sentence coarsely enough to remind us of a semantic neighborhood and then we fill in the rest from base rates. Often a writer will have to anticipate this and explicitly disavow an expected confusion: “You probably think I mean this, but I really mean that.”

I’ve been trying to do this more often, for example in this 2011 article on the philosophy of Bayesian statistics, where I’m pretty explicit about what I’m disagreeing with (for example, the first section of this paper is entitled, “The Standard View of the Philosophy of Statistics, and Its Malign Influence on Statistical Practice”).

Still, I think the necessity of clearing-away-the-old creates an additional degree of difficulty in much of academic writing.

What I found exciting in Pinker’s note above was his connection of this vague idea, which I arrived at by introspection, to research in cognition.

Background

And here’s how we got there.

Our conversation started with Pinker reacting to my statement that it’s not so remarkable that academics don’t in general write well; after all, writing is hard.

Pinker wrote:

The paradox is that academics might be expected to do better than laypeople in their writing, since the communication of ideas is part of their job description. So if academics are no better than laypeople at writing, that raises a puzzle. And . . . academics are not just average writers – academic prose is often, notoriously, far worse than nonprofessional writing.

Pinker also pointed out a place where I’d misread what he’d written even though he’d taken “a great deal of care in crafting the sentence” (as he put it).

This was interesting and reflected experiences I’ve had, where I try so so hard to be clear, and people still come up with a misreading of my prose.

I reflected on this and replied to Pinker as follows:

My misreading of your article raises an interesting meta-issue about writing which might interest you, as I think it’s also relevant to the puzzle of why academic writing is often so bad, despite the fact that academics typically get a lot of practice at it.

What happens to me a lot in writing, whether it’s a scholarly article, a textbook, or a popular article, is that I am aware of a possible misunderstanding, and I carefully craft my sentences to avoid the possible pitfall—but people fall into it anyway! For example, when writing about significance testing, I am careful to avoid saying that the p-value is the probability the null hypothesis is true, so I use a very careful wording, but then people read my writing as if I’d said that wrong thing. I realize that in such cases it’s not enough to say it right and avoid the error; I really need to explicitly state that I’m not saying that other thing that people are expecting to hear. (Which may be what happened in the cases where I misread you; I read a sentence on a certain topic, was expecting to hear a certain thing (“professors have a foreign policy” or whatever), and then I heard it, misprocessing the incoming words.) So in my more recent writing I’ve tried harder (not always with success) not just to avoid making an error but also to make clear to the reader where the error arises.

Anyway, back to academic writing. Perhaps academics are often in this situation: trying to explain an idea, while also explaining why a certain natural-seeming interpretation is not correct. Which means that, to be most effective, we have to both convey our idea and also convey the idea that we think is wrong. I think this is an inherently difficult task (and I suspect you’ll agree on this, as I know that a lot of your writing has this feature, that you explain the appeal of the wrong model in the context of explaining your own ideas). So . . . perhaps one contribution to the ugliness of academic writing is that academic writing, to be compelling, often has to do both these tasks, and that’s not easy.

And all this is, perhaps, especially true when academics are writing about topics of general interest (which is, of course, the time that outsiders are likely to encounter our work. A nonstatistician might well read my article on the statistical crisis in science; such an outsider is not so likely to read one of my articles that’s full of math): in our general-interest articles we’re often explaining how a certain common idea is actually wrong.

It was in response to this message that Pinker sent the note given above, with the connection to psychology research.

The post Discussion with Steven Pinker connecting cognitive psychology research to the difficulties of writing appeared first on Statistical Modeling, Causal Inference, and Social Science.

09 Feb 18:57

Professor ratings by gender and discipline

by Nathan Yau

Professor ratings

Based on about 14 million reviews on RateMyProfessor, this tool by Ben Schmidt lets you compare words used to describe professors, categorized by gender and discipline. For example, the above shows the usage rate of "smart" in reviews: rates are lower for professors who are women than for men, in every discipline. This is true whether you look at all reviews at once, just positive ones, or just negative ones.

Telling. And just the beginning. Do your own search and find out more about the data and models on Schmidt's FAQ.

Tags: education, gender

06 Feb 00:40

Vaccination rate and measles outbreak simulation

by Nathan Yau

Vax rate and infection simulation

You've probably heard about herd immunity by now. Vaccinations help the individual and the community, especially those who are unable to receive vaccinations for various reasons. The Guardian simulated what happens at various vaccination rates.

Luckily, the measles vaccine — administered in the form of the MMR for measles, mumps and rubella — is very effective. If delivered fully (two doses), it will protect 99% of people against the disease. But, like all vaccines, it’s not perfect: 1% of cases are likely to result in vaccine failure, meaning recipients won’t develop an immune response to the given disease, leaving them vulnerable. Even with perfect vaccination, one of every 100 people would be susceptible to measles, but that’s much better than the alternative.
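Here's a minimal sketch of that kind of simulation (my own toy version, not the Guardian's; the population size, the random contact structure, and a measles-like R0 of about 12 are all assumptions):

    # Seed one measles case and see how far it spreads at a given vaccination rate.
    import random

    def outbreak_size(vax_rate, n=1000, contacts_per_case=12, efficacy=0.99):
        protected = [random.random() < vax_rate * efficacy for _ in range(n)]
        infected, frontier = {0}, [0]
        while frontier:
            case = frontier.pop()
            for _ in range(contacts_per_case):
                other = random.randrange(n)
                if other not in infected and not protected[other]:
                    infected.add(other)
                    frontier.append(other)
        return len(infected)

    for rate in (0.50, 0.80, 0.95):
        runs = sorted(outbreak_size(rate) for _ in range(25))
        print(f"{rate:.0%} vaccinated: median outbreak {runs[12]} of 1000")

Below the herd-immunity threshold, a single seed case routinely blows up into hundreds of infections; above it, outbreaks fizzle after a handful of cases.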

If you're still unsure, please consult this flowchart to decide.

Tags: Guardian, measles, simulation, vaccination

30 Jan 19:15

Chances that a drug treatment helps

by Nathan Yau

Treatment odds

It's a common belief that if someone has a medical condition, they can take a treatment and the condition gets better or goes away. That is, improvement is directly related to intake. However, as it turns out, there's often a good chance the patient would have gotten better without the treatment. There's also a chance the treatment does nothing.

Austin Frakt and Aaron E. Carroll for the Upshot describe these chances through a metric called number needed to treat, or N.N.T. The simple animations throughout the article provide a great dose of perspective on the odds.
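The arithmetic behind the metric is short enough to show here (the standard N.N.T. formula; the event rates below are invented):

    # Number needed to treat = 1 / absolute risk reduction.
    control_rate = 0.20  # 20% of untreated patients have the bad outcome
    treated_rate = 0.15  # 15% of treated patients do

    arr = control_rate - treated_rate  # absolute risk reduction: 0.05
    nnt = 1 / arr                      # 20

    print(f"NNT = {nnt:.0f}: treat {nnt:.0f} people for one extra good outcome;")
    print("the rest would have fared the same with or without the treatment.")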

Tags: drugs, health, Upshot

15 Jan 02:42

One animated art piece per day, with D3.js

by Nathan Yau

D3js art

Data-Driven Documents, or D3.js, is a flexible JavaScript library that lets you draw and move things in the browser. If it isn't already, it's on its way to becoming the tool of choice for visualization on the web. It's important to remember, though, that the library isn't just for charts, graphs, and maps (although those things are nice). Case in point: John Firebaugh's new project to make one animated art piece a day.

Playful, mesmerizing, and a total time suck.

Tags: animation, d3js

09 Nov 00:23

Why broken sleep is a golden time for creativity

by Karen Emslie

Photo by Michael Lewis/Gallery Stock

It is 4.18am. In the fireplace, where logs burned, there are now orange lumps that will soon be ash. Orion the Hunter is above the hill. Taurus, a sparkling V, is directly overhead, pointing to the Seven Sisters. Sirius, one of Orion’s heel dogs, is pumping red-blue-violet, like a galactic disco ball. As the night […]

The post Broken sleep appeared first on Aeon Magazine.

13 Oct 16:20

Rational != Self-interested

by Andrew
Kayle Sawyer

This is an excellent clarification of some of the fundamental terms in economics. In my experience, most economists who define all behavior as rational do so to emphasize that they are not interested in what internal psychological factors influence how people make the choices they do, but instead want to focus on what extrinsic factors influence people's choices. It's absolutely a gray zone, and clearer semantic definitions (as in this post) would allow for better research.


I’ve said it before (along with Aaron Edlin and Noah Kaplan) and I’ll say it again. Rationality and self-interest are two dimensions of behavior. An action can be:
1. Rational and self-interested
2. Irrational and self-interested
3. Rational and altruistic
4. Irrational and altruistic.
It’s easy enough to come up with examples of all of these.

Before going on, let me just quickly deal with three issues that sometimes come up:
– Yes, these are really continuous scales, not binary.
– Sure, you can tautologically define all behavior as “rational” in that everything is done for some reason. But such an all-encompassing definition is not particularly interesting, as it drains all meaning from the term.
– Similarly, if you want, you can tautologically define all behavior as self-interested, in the sense that if you do something nice for others that does not benefit yourself (for example, donate a kidney to some stranger), you must be doing it because you want to, so that’s self-interested. But, as I wrote a few years ago, the challenge in all such arguments is to avoid circularity. If selfishness means maximizing utility, and if we always maximize utility (by definition, otherwise it isn’t our utility, right?), then we’re always selfish. But then that’s like, if everything in the world is the color red, would we have a word for “red” at all? I’m using self-interested in the more usual sense of pursuing instrumental benefits for oneself.

To put it another way, if “selfish” means utility-maximization, which by definition is always being done (possibly to the extent of being second-order rational by rationally deciding not to spend the time to exactly optimize our utility function), then everything is selfish. Then let’s define a new term, “selfish2,” to represent behavior that benefits ourselves instrumentally without concern for the happiness of others. Then my point is that rationality is not the same as selfish2.

What’s new here?

The above is all background. It came to mind after I read this recent post by Rajiv Sethi regarding agent-based models. Sethi quotes Chris House who wrote:

The reason that economists set up their theories this way – by making assumptions about goals and then drawing conclusions about behavior – is that they are following in the central tradition of all of economics, namely that allocations and decisions and choices are guided by self-interest. This goes all the way back to Adam Smith and it’s the organizing philosophy of all economics. Decisions and actions in such an environment are all made with an eye towards achieving some goal or some objective. For consumers this is typically utility maximization – a purely subjective assessment of well-being. For firms, the objective is typically profit maximization. This is exactly where rationality enters into economics. Rationality means that the “agents” that inhabit an economic system make choices based on their own preferences.

No no no no no. Self-interest is the end, rationality is the means. You can pursue non-self-interested goals in rational or irrational ways, and you can pursue self-interested goals in rational or irrational ways.

Sethi’s post is about the relevance of agent-based models (as indicated in the above YouTube clip) to the study of economics and finance, and is worth reading on its own terms. But it also reminds me of the general point that we should not conflate rationality with self-interest. I can see the appeal of such a confusion, as it seems to be associated with a seemingly hard-headed, objective view of the world. But really it’s an oversimplification that can lead to lots of confusion.

P.S. House’s blog is subtitled, “Economics, chess and anything else on my mind.” This got me interested so I entered “chess” into the search box but all that came out was this, which isn’t about chess at all. So that was a disappointment.

P.P.S. Some commenters asked for examples so I added some in comments. I’ll repeat them here.

First, the real-life example:

Some students in my class are designing and building a program to display inferences from Stan. They are focused on others’ preferences; they want to make a program that works for others, for the various populations of users out there. And they are trying to achieve this goal in a rational way.

Second, the quick examples:

Rational and self-interested: investing one’s personal money in index funds based on a judgment that this is the savvy way to long-term financial reward.

Rational and non-self-interested: donating thousands of dollars to a charity recommended by GiveWell.

Irrational and self-interested: day trading based on tips you find on sucker-oriented websites and gradually losing your assets in commission fees.

Irrational and non-self-interested: rushing into a burning building, risking your life to save your pet goldfish that was gonna die in 2 days anyway.

You could argue about the details of any of these examples but the point is that rationality is about the means and self-interest is about the ends.
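As a toy restatement (the boolean tags are just my own encoding of the four examples above), the two axes vary independently, and all four combinations occur:

```python
# (example, self-interested ends?, rational means?)
examples = [
    ("index-fund investing",         True,  True),
    ("donating via GiveWell",        False, True),
    ("day trading on tip sites",     True,  False),
    ("rescuing the doomed goldfish", False, False),
]

for action, self_interested, rational in examples:
    ends = "self-interested" if self_interested else "non-self-interested"
    means = "rational" if rational else "irrational"
    print(f"{action:30s} ends: {ends:20s} means: {means}")
```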

The post Rational != Self-interested appeared first on Statistical Modeling, Causal Inference, and Social Science.

10 Oct 19:57

Ebola spreading, a simulation

by Nathan Yau

[Image: Ebola compared to other diseases]

As a way to understand the deadliness and spread of Ebola, the Washington Post runs a simplified simulation of how long it's likely to take for the virus to infect 100 unvaccinated people. The simulation runs alongside several other diseases for comparison, which provides the main takeaway: Ebola is much more deadly than the other listed diseases, but it spreads much more slowly.
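For intuition about "deadly but slow," here is a crude branching-process sketch. This is not the Post's actual model; the R0 and serial-interval values are rough figures from the epidemiology literature, and everything else is heavily simplified.

```python
import numpy as np

def days_to_100_cases(r0, serial_interval_days, seed=1):
    """Each active case infects Poisson(r0) new people once per serial
    interval. Returns a (very rough) number of days until 100 total
    cases, or None if the transmission chain dies out."""
    rng = np.random.default_rng(seed)
    active, total, day = 1, 1, 0
    while total < 100:
        new = rng.poisson(r0, size=active).sum()
        if new == 0:
            return None  # outbreak fizzled before reaching 100 cases
        active, total = new, total + new
        day += serial_interval_days
    return day

# Rough literature values: Ebola, R0 ~ 2 with a ~15-day serial interval;
# measles, R0 ~ 15 with a ~12-day serial interval.
for name, r0, si in [("Ebola", 2.0, 15), ("Measles", 15.0, 12)]:
    d = days_to_100_cases(r0, si)
    print(name, "->", "chain died out" if d is None else f"~{d} days")
```

With numbers like these, measles blows past 100 cases in a couple of generations while Ebola takes on the order of months, which is the qualitative pattern the piece highlights.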

Tags: ebola, Washington Post

25 Sep 20:08

09/22/14 PHD comic: 'Google Search Suggestions'

Piled Higher & Deeper by Jorge Cham (www.phdcomics.com)

[Comic: "Google Search Suggestions," originally published 9/22/2014]

05 Sep 15:16

The Dying Russians

by editors
Kayle Sawyer

The hypothesis is that Russians are dying due to psychological stress and hopelessness.

A 15-year-old Russian has a shorter life expectancy than a peer in Bangladesh, Cambodia, or Yemen.

[Full Story]

14 Aug 00:05

Cultural history via where notable people died

by Nathan Yau

A group of researchers used the birthplaces and places of death of "notable individuals," based on data from Freebase, as a lens into cultural history. A video explainer shows some of the results:

From Nature:

The team used those data to create a movie that starts in 600 bc and ends in 2012. Each person's birth place appears on a map of the world as a blue dot and their death as a red dot. The result is a way to visualize cultural history — as a city becomes more important, more notable people die there.

Before you jump to too many conclusions, keep in mind where the data comes from. Freebase is kind of like Wikipedia for data, so you get cultural bias towards the United States and Europe. There are fewer data points just about everywhere else.

Therefore, avoid the inclination to conclude that such-and-such city or country is unimportant; focus on the data that is there and compare it to what else is in the vicinity. From this angle, this is interesting stuff. [Science via Nature | Thanks, Mauro]
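For a sense of how such a map gets drawn, here is a minimal matplotlib sketch in the same spirit: birth places as blue dots, death places as red dots, processed in death-year order (an animation would reveal them one year at a time). The two records below are invented placeholders, not Freebase data.

```python
import matplotlib.pyplot as plt

# (name, birth (lon, lat), death (lon, lat), death year) -- placeholders
people = [
    ("A", (12.5, 41.9), (2.35, 48.86), 1650),
    ("B", (-0.13, 51.5), (-74.0, 40.7), 1890),
]

fig, ax = plt.subplots()
for _name, (blon, blat), (dlon, dlat), _year in sorted(people, key=lambda p: p[3]):
    ax.scatter(blon, blat, c="blue", s=12)  # birth place: blue dot
    ax.scatter(dlon, dlat, c="red", s=12)   # death place: red dot
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
ax.set_title("Births (blue) and deaths (red) of notable people")
plt.show()
```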

11 Aug 18:31

New “open” licenses aren’t so open

by carlosmonterrey
Kayle Sawyer

Why is STM making a fake open license, when CC is perfectly good? What is their motive? Do they have a financial incentive that is encouraging this mild corruption, or are they just misguided?

[Image: open access image from the Public Library of Science]

Wikimedians have long been excited by the growth of the Open Access scholarship movement. Open Access scholarship has made vast amounts of images, video, and data available to the entire world, and in the process enriched Wikimedia projects as well. For example, over 2,000 images from the open access Public Library of Science are used in Wikipedia articles, including the adorable Brookesia micra to the right.

Unfortunately, some participants in scholarly publishing would like to water down “open access”, so that they get credit for being “open” while continuing to charge the public for access to knowledge. Today, we join fifty-five other open access groups in protesting the latest such attempt – publication of new “open” licenses that aren’t actually open.

These new licenses were written by the International Association of Scientific, Technical & Medical Publishers (STM). STM’s new “open access” licenses fail to meet the basic standards set out by the Freedom Definition and the Open Knowledge Definition. For example, they restrict commercial use, and in some cases even “competing” uses. They also restrict text and data mining, activities that should be permitted to ensure that researchers can grow and expand human knowledge.

The licenses also damage interoperability. Beyond being incompatible with reuse in Creative Commons-licensed works, two of the licenses are designed to be added on to other licenses, causing even more confusion. While the title says that they “add” rights, and the body speaks of “enabling” researchers, that is only true if the addendums are added to restrictive licenses that prohibit derivatives. If the addendums are combined with open licenses like Creative Commons Share-Alike licenses, the result would be a reduction of rights. Because these licenses and addendums are not compatible with our licensing policies, materials licensed under them cannot be uploaded to Wikimedia.

We join the Public Library of Science, Open Knowledge, and many other groups in urging STM to withdraw these licenses. We also urge publishers and authors who are considering these licenses to instead grow the knowledge commons by using standard, interoperable, open licenses like Creative Commons Attribution-ShareAlike and Creative Commons Zero.

Luis Villa, Deputy General Counsel at the Wikimedia Foundation