Shared posts

05 Jul 09:36

An evaluation of different partitioning strategies for Bayesian estimation of species divergence times

by Angelis K, Álvarez-Carretero S, dos Reis M, et al.
Abstract
The explosive growth of molecular sequence data has made it possible to estimate species divergence times under relaxed-clock models using genome-scale datasets with many gene loci. In order both to improve model realism and to best extract information about relative divergence times in the sequence data, it is important to account for the heterogeneity in the evolutionary process across genes or genomic regions. Partitioning is a commonly used approach to achieve those goals. We group sites that have similar evolutionary characteristics into the same partition and those with different characteristics into different partitions, and then use different models or different values of model parameters for different partitions to account for the among-partition heterogeneity. However, how to partition data in practical phylogenetic analysis, and in particular in relaxed-clock dating analysis, is more art than science. Here, we use computer simulation and real data analysis to study the impact of the partition scheme on divergence time estimation. The partition schemes had relatively minor effects on the accuracy of posterior time estimates when the prior assumptions were correct and the clock was not seriously violated, but showed large differences when the clock was seriously violated, when the fossil calibrations were in conflict or incorrect, or when the rate prior was mis-specified. Concatenation produced the widest posterior intervals with the least precision. Use of many partitions increased the precision, as predicted by the infinite-sites theory, but the posterior intervals might fail to include the true ages because of the conflicting fossil calibrations or mis-specified rate priors. We analyzed a dataset of 78 plastid genes from 15 plant species with serious clock violation and showed that time estimates differed significantly among partition schemes, irrespective of the rate drift model used. Multiple and precise fossil calibrations reduced the differences among partition schemes and were important to improving the precision of divergence time estimates. While the use of many partitions is an important approach to reducing the uncertainty in posterior time estimates, we do not recommend its general use for the present, given the limitations of current models of rate drift for partitioned data and the challenges of interpreting the fossil evidence to construct accurate and informative calibrations.
19 May 21:06

The teachers’ merry-go-round: job insecurity and staff turnover over the last 2 years

by Gianna Barbieri
The paper illustrates the recent trends in the use of contract staff and in staff turnover in the Italian education system, traditionally plagued by high levels of job insecurity and turnover. The system has been affected by the extraordinary scheme for hiring permanent teachers in 2015 and a mobility and reallocation plan the following year. The number of people on the national list of untenured teachers (frozen about 10 years ago) shrank from 124,000 to 47,000 (to which we should, however, add another 34,000 teachers placed on a reserve list following a judgment by the State Council). The use of contract staff remained largely unchanged: the number of annual job contracts rose from 118,000 to 126,000, but fell from 14.6% to 14.4% as a share of total staff, the numbers of which have grown in the meantime. The share of those on the local lists of supply teachers (yet not required to follow a postgraduate internship program) increased among teachers with a yearly contract, who are on average younger than before. Staff mobility and turnover rose sharply, notwithstanding implementation problems which led to a considerable rise in staff reassignment in the last school year.
30 Mar 12:29

Ancient recombination events between human herpes simplex viruses

by Burrel S, Boutolleau D, Ryu D, et al.
<span class="paragraphSection"><div class="boxTitle">Abstract</div>Herpes simplex viruses 1 and 2 (HSV-1 and HSV-2) are seen as close relatives but also unambiguously considered as evolutionary independent units. Here, we sequenced the genomes of 18 HSV-2 isolates characterized by divergent UL30 gene sequences to further elucidate the evolutionary history of this virus. Surprisingly, genome-wide recombination analyses showed that all HSV-2 genomes sequenced to date contain HSV-1 fragments. Using phylogenomic analyses, we could also show that two main HSV-2 lineages exist. One lineage is mostly restricted to sub-Saharan Africa while the other has reached a global distribution. Interestingly, only the worldwide lineage is characterized by ancient recombination events with HSV-1. Our findings highlight the complexity of HSV-2 evolution, a virus of putative zoonotic origin which later recombined with its human-adapted relative. They also suggest that co-infections with HSV-1 and 2 may have genomic and potentially functional consequences and should therefore be monitored more closely.</span>
27 Mar 09:44

Why do phylogenomic data sets yield conflicting trees? Data type influences the avian tree of life more than taxon sampling

by Reddy S, Kimball RT, Pandey A, et al.
<span class="paragraphSection"><div class="boxTitle">Abstract</div>Phylogenomics, the use of large-scale data matrices in phylogenetic analyses, has been viewed as the ultimate solution to the problem of resolving difficult nodes in the tree of life. However, it has become clear that analyses of these large genomic datasets can also result in conflicting estimates of phylogeny. Here we use the early divergences in Neoaves, the largest clade of extant birds, as a ‘model system’ to understand the basis for incongruence among phylogenomic trees. We were motivated by the observation that trees from two recent avian phylogenomic studies exhibit conflicts. Those studies used different strategies: 1) collecting many characters [∼42 mega base pairs (Mbp) of sequence data] from 48 birds, sometimes including only one taxon for each major clade; and 2) collecting fewer characters (∼0.4 Mbp) from 198 birds, selected to subdivide long branches. However, the studies also used different data types: the taxon-poor data matrix comprised 68% non-coding sequences whereas coding exons dominated the taxon-rich data matrix. This difference raises the question of whether the primary reason for incongruence is the number of sites, the number of taxa, or the data type. To test among these alternative hypotheses we assembled a novel, large-scale data matrix comprising 90% non-coding sequences from 235 bird species. Although increased taxon sampling appeared to have a positive impact on phylogenetic analyses the most important variable was data type. Indeed, by analyzing different subsets of the taxa in our data matrix we found that increased taxon sampling actually resulted in increased congruence with the tree from the previous taxon-poor study (which had a majority of non-coding data) instead of the taxon-rich study (which largely used coding data). We suggest that the observed differences in the estimates of topology for these studies reflect data-type effects due to violations of the models used in phylogenetic analyses, some of which may be difficult to detect. If incongruence among trees estimated using phylogenomic methods largely reflects problems with model fit developing more ‘biologically-realistic’ models is likely to be critical for efforts to reconstruct the tree of life.</span>
18 Oct 21:18

On a generalization of the preconditioned Crank-Nicolson Metropolis algorithm. (arXiv:1504.03461v2 [stat.CO] UPDATED)

by Daniel Rudolf, Björn Sprungk

Metropolis algorithms for approximate sampling of probability measures on infinite-dimensional Hilbert spaces are considered, and a generalization of the preconditioned Crank-Nicolson (pCN) proposal is introduced. The new proposal is able to incorporate information about the measure of interest. A numerical simulation of a Bayesian inverse problem indicates that a Metropolis algorithm with such a proposal performs independently of the state space dimension and of the variance of the observational noise. Moreover, a qualitative convergence result is provided by a comparison argument for spectral gaps. In particular, it is shown that the generalization inherits geometric ergodicity from the Metropolis algorithm with pCN proposal.
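For orientation, here is a minimal sketch of the standard pCN Metropolis step that the paper takes as its starting point (the generalized proposal studied in the paper additionally incorporates information about the measure of interest, which is not shown here). The function name, the toy inverse problem, and all parameter values below are illustrative assumptions, not taken from the paper: the target is assumed to have density proportional to exp(-Φ(x)) with respect to a Gaussian prior N(0, C), the proposal is x' = sqrt(1 - β²) x + β ξ with ξ ~ N(0, C), and the acceptance probability is min(1, exp(Φ(x) - Φ(x'))).

```python
import numpy as np

def pcn_metropolis(phi, C_sqrt, x0, beta=0.25, n_steps=2000, rng=None):
    """Standard pCN Metropolis sampler (illustrative sketch).

    Target: density proportional to exp(-phi(x)) w.r.t. a Gaussian prior N(0, C).
    phi    -- potential, e.g. the negative log-likelihood Phi
    C_sqrt -- a square root of the prior covariance, C = C_sqrt @ C_sqrt.T
    beta   -- step-size parameter in (0, 1]
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    phi_x = phi(x)
    samples = []
    for _ in range(n_steps):
        xi = C_sqrt @ rng.standard_normal(x.shape)       # draw from the prior N(0, C)
        prop = np.sqrt(1.0 - beta ** 2) * x + beta * xi  # pCN proposal
        phi_prop = phi(prop)
        if np.log(rng.uniform()) < phi_x - phi_prop:     # accept w.p. min(1, exp(Phi(x) - Phi(x')))
            x, phi_x = prop, phi_prop
        samples.append(x.copy())
    return np.array(samples)

# Toy linear Bayesian inverse problem: observe the first 10 of d coordinates with noise.
d, sigma = 50, 0.1
rng = np.random.default_rng(0)
A = np.eye(d)[:10]
C_sqrt = np.diag(1.0 / np.arange(1, d + 1))              # decaying prior covariance
x_true = C_sqrt @ rng.standard_normal(d)
y = A @ x_true + sigma * rng.standard_normal(10)
phi = lambda x: np.sum((y - A @ x) ** 2) / (2.0 * sigma ** 2)

chain = pcn_metropolis(phi, C_sqrt, np.zeros(d), rng=rng)
print(chain.shape)  # (2000, 50)
```

Because the prior-reversible pCN proposal keeps the acceptance ratio a function of Φ alone, the acceptance rate does not collapse as the discretization dimension d grows, which is the kind of dimension-robust behavior the abstract refers to.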

26 Jul 06:59

On the Number of Many-to-Many Alignments of Multiple Sequences. (arXiv:1511.00622v2 [math.CO] UPDATED)

by Steffen Eger

We count the number of alignments of $N \ge 1$ sequences when match-up types are from a specified set $S\subseteq \mathbb{N}^N$. Equivalently, we count the number of nonnegative integer matrices whose rows sum to a given fixed vector and each of whose columns lie in $S$. We provide a new asymptotic formula for the case $S=\{(s_1,\ldots,s_N) \:|\: 1\le s_i\le 2\}$.
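As a concrete illustration of the counting problem itself (not of the paper's asymptotic formula), the matrix formulation gives a direct recursion: the number of alignments of sequences of lengths $(n_1,\ldots,n_N)$ is the sum, over match-up types $s \in S$, of the count for the reduced length vector $(n_1-s_1,\ldots,n_N-s_N)$. A minimal sketch in Python, assuming $S$ does not contain the all-zero column; the function name and the example sets are mine:

```python
from functools import lru_cache
from itertools import product

def count_alignments(lengths, S):
    """Count alignments of N sequences whose columns (match-up types) come from S.

    lengths -- tuple (n_1, ..., n_N) of sequence lengths
    S       -- iterable of N-tuples of nonnegative integers, none of them all-zero
    """
    S = [tuple(s) for s in S]

    @lru_cache(maxsize=None)
    def count(v):
        if all(x == 0 for x in v):
            return 1                                    # the empty alignment
        return sum(count(tuple(x - y for x, y in zip(v, s)))
                   for s in S
                   if all(x >= y for x, y in zip(v, s)))

    return count(tuple(lengths))

# Pairwise case with matches and single-sequence gaps: central Delannoy numbers.
print(count_alignments((2, 2), [(1, 0), (0, 1), (1, 1)]))   # 13

# The many-to-many case of the asymptotic result, 1 <= s_i <= 2, for three sequences.
S_12 = list(product([1, 2], repeat=3))
print(count_alignments((4, 4, 4), S_12))                    # 29
```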

14 Nov 22:07

FDA reportedly will take on homeopathy

by whyevolutionistrue

We all know that homeopathy is a complete scam. Its conceptual basis alone renders it at once laughable and scary, while studies of its efficacy always show nothing above placebo effects. And yet the stuff is sold where apparently intelligent people shop, including Whole Foods and the Davis, California Food Coop, a place I used to shop as a postdoc. The homeopathic “remedies” in those places also make health claims, something that I always thought was illegal.

Well, according to both DeadState and BuzzFeed (the latter is apparently getting more serious), the Food and Drug Administration (FDA) in the US is set to take some regulatory action. The BuzzFeed piece, by Dan Vergano (who used to be my editor at USA Today), is actually quite good, giving, along with the news, a history of homeopathy and the evidence that it doesn’t work. An excerpt:

Homeopathic drugs are not subject to much regulatory scrutiny. The FDA ensures that their manufacturing facilities are clean, but the products are not evaluated for their health claims. (This is even less regulation than what’s required for dietary supplements, vitamins, herbs, and minerals, which face some limits on the health claims they can make.)

But that may change. This month, both the FDA, which oversees drug safety, and the Federal Trade Commission (FTC), which oversees drug ads, will end lengthy public comment periods that followed hearings on homeopathy. The FTC [Federal Trade Commission] smacked its sister drug-safety agency in public comments in August, calling for the FDA to crack down on homeopathic products, which “may harm consumers.”

About damn time! The harm is not just that they sometimes contain stuff that can actually harm people (like high levels of ethanol, or, in teething tablets, toxic levels of belladonna), but also that they can eat up time when sick people resort to ineffective homeopathic nostrums instead of scientific cures. (I’ve described already how a French friend with salivary-gland cancer tried to treat it homeopathically, but it got worse and he finally needed an operation. He appears to be okay now, thank Ceiling Cat).

But of course the quackery has its defenders, as always:

Still, homeopathy’s supporters say these remedies have a much better safety record than conventional drugs, along with millions of satisfied customers.

“We believe they are proven to be effective and extremely safe,” Ronald Whitmont, an internist at the New York Medical College and president of the American Institute of Homeopathy, told BuzzFeed News. The institute has been around since 1844 (longer than the American Medical Association), publishes its own research journal, and has certified dozens of physicians as “homeotherapeutics” specialists.

. . . But both Whitmont and Mark Land, president of the American Association of Homeopathic Pharmacists, support FDA and FTC moves to change the labels on homeopathic drugs and advertising to acknowledge they haven’t been tested for efficacy, similar to dietary supplements.

That would still keep homeopathic drugs on pharmacy shelves, however. Homeopathy “historically relies on recommendations from highly satisfied customers to influence sales,” Land told BuzzFeed News by email. “It’s an outstanding example of the U.S. free market.”

No, it’s an outstanding example of how unethical companies take advantage of people’s ignorance and credulity. It’s time to remove all claims of efficacy from these “drugs”, slap big warning labels on them, or, better yet, just remove them from pharmacy shelves. Companies like Whole Foods that sell homeopathic remedies should be ashamed of themselves.

h/t: Barry


11 Jan 23:20

Nicholas Wade writes a shamefully ignorant review of Bill Nye’s new evolution book

by whyevolutionistrue

For a long time I’ve thought that many of the senior science writers of the New York Times have outlived their usefulness. It might not be a function of age, but simply poor quality journalism. Regardless, the Times could use a serious shake-up in its science section.

Happily, one of their senior writers, Nicholas Wade, retired in 2012, but he still writes occasionally for the paper, and he recently published a shamefully bigoted and ignorant book on human races, A Troublesome Inheritance: Genes, Race and Human History (see the critical reviews in the Times itself, as well as in the New York Review of Books by my first student, Allen Orr).

And in the December 22 Wall Street Journal, Wade once again shows his failure to grasp my own field in a review of Bill Nye’s new book on the evidence for evolution, Undeniable: Evolution and the Science of Creation. Wade’s review, called “Bill Nye, the Darwin Guy”, is deficient on a number of counts. I’ll highlight three (Wade’s piece is short):

1. What Wade considers the “most direct” and “most undeniable” evidence for evolution is dubious.  Curiously, Wade touts that evidence as some molecular data on gene substitutions:

Mr. Nye writes briskly and accessibly. He favors short, sound-bitey sentences. He is good on the geological and fossil evidence for evolution, reflecting his background in the physical sciences, but devotes less attention to changes in DNA, which furnish the most direct evidence of evolution. A recent paper in the journal PLOS Genetics, for instance, describes the seven DNA mutations that occurred over the past 90 million years in the gene that specifies the light-detecting protein of the retina. These mutations shifted the protein’s sensitivity from ultraviolet to blue, the first step in adapting a nocturnal animal to daytime vision and in generating the three-color vision of the human eye. Such insights into nature’s actual programming language are surely the most undeniable part of evolution at work.

I haven’t read the PLoS Genetics paper, but I’ve been told that it involves using mutation-making technology to alter visual proteins, and then seeing which amino acids that have changed also alter the perception of different wavelengths of light.

But that kind of stuff has been going on for a long time, and it’s hardly “direct”. While it does support natural selection, it’s somewhat inferential and, more important, could be dismissed by creationists as simply showing “microevolution.” If you want direct evidence of natural selection producing microevolution, why not use the many observations we have of selection operating in the wild, most famously the Grants’ work on the Galápagos finches? Isn’t that actually more direct than looking at protein changes that have occurred over millions of years? Or how about the formation of new species of plants that we’ve seen occur in the last 50 years? Or the changes in lactose tolerance that have occurred in pastoral populations (those that keep animals for milk) in the last 10,000 years? What, exactly, does Wade mean by “direct”?

Further, why aren’t changes in fossils, showing both trait changes and the evolution of new “kinds” (e.g., amphibians from fish, birds from dinosaurs, land-dwelling artiodactyls into whales, etc. etc. etc.) just as direct as (and even more undeniable than) looking at historical changes in proteins? Or all the evidence from embryology, vestigial organs, and biogeography that I adduce in WEIT? Why isn’t that just as direct and undeniable as looking at changes in molecules that separate species? In fact, one could make the case that showing adaptive differences in protein function among species, as the PLoS Genetics authors probably did, doesn’t really count as decisive evidence for evolution. After all, couldn’t those protein differences have been put there by God? We weren’t there to see them happen, after all.

But one can’t make such a Goddy explanation for evidence like the fossil record or biogeography, and that’s why I downplayed protein-sequence evidence in my book. I was looking for the more undeniable evidence—stuff that creationists couldn’t easily counter. At any rate, Wade, excited by molecular biology, fails to realize that the case he cites might not be the most undeniable and direct evidence for evolution after all—and it could even be said to comport with creationism.

2. Wade sees a serious scientific problem in the supposedly short time during which life originated.  

Mr. Nye’s analysis also glosses over bristling perplexity. He says that there are a billion years between the Earth’s formation 4.5 billion years ago and the first fossil evidence of life, plenty of time for the chemical evolution of the first living cells. But this fact is long outdated. A heavy meteorite bombardment some 3.9 billion years ago probably sterilized the planet, yet the first possible chemical evidence of life appears in rocks some 3.8 billion years old. This leaves startlingly little time for the first living cells to have evolved. Reconstruction of the chemical steps by which they did so is a daunting and so far unsolved problem. Mr. Nye might have done better to concede as much.

As we’ll see below, Wade almost seems to feel that there is something problematic in the neo-Darwinian theory of evolution that might render it wrong. Abiogenesis—the origin of life from nonlife—is something that he, along with creationists, sees as such a problem. But the paragraph above is misleading. “Sterilizing the planet” means “killing everything alive on Earth.” While that might have happened, it didn’t mean that the chemical precursors of life would be eradicated. Also, there is controversy about whether that meteorite bombardment had the effects he said it did, or even when it occurred. We know that indubitable evidence for life appears about 3.5 billion years ago (I don’t trust his 3.8-billion-year figure), so that gives about half a billion years for the first cell (a bacterium) to appear. Is that really “startlingly little time”? Has Wade made any models that show it’s an unrealistic time? It’s 500,000,000 years, which is pretty long, and if the precursor chemicals were there beforehand, the time for the origin of life becomes even longer.

Wade also doesn’t mention that the “living cells” he mentions are prokaryotic cells like bacteria, lacking a nucleus and much of the chemical and structural complexity of “true” eukaryotic cells, which didn’t appear until 1.6 billion years ago—2.3 billion years after the “sterilization”. What Wade sees as a daunting problem isn’t an insuperable problem. Yes, it’s unsolved, but there are many things about evolution that we don’t understand, like what proto-bats looked like. What does Wade want Nye to concede: that we don’t yet understand the origin of life? Fine, then, concede it, but add that we’re making great strides in solving that problem.

3. Wade suggests a compromise between evolutionists and creationists which is simply insane. This is the most infuriating part of his review. Here’s what Wade says will bring amity between the two groups (my emphasis):

Mr. Nye’s fusillade of facts won’t budge them an inch. Isn’t there some more effective way of persuading fundamentalists to desist from opposing the teaching of evolution? If the two sides were willing to negotiate, it would be easy enough to devise a treaty that each could interpret as it wished. In the case of teaching evolution in schools, scientists would concede that evolution is a theory, which indeed it is. Fundamentalists might then be willing to let their children be taught evolution, telling them it is “just a theory.” Evolution, of course, is no casual surmise but a theory in the solemn scientific sense, a grand explanatory system that accounts for a vast range of phenomena and is in turn supported by them. Like all scientific theories, however, it is not an absolute, final truth because theories are always subject to change and emendation.

Yeah, like that suggestion is going to get fundamentalists to agree to the teaching of evolution! Note to Wade: creationists aren’t stupid enough to buy your little plan. They don’t want evolution taught as the only theory that explains the origin and diversity of life, however that theory is characterized.

Further, scientists have already “conceded” (as Wade puts it) that evolution is a theory. But it’s not only a theory, for it’s so well supported by the data that, as I show in WEIT, it’s also regarded as a fact. (What I mean by “evolution” here are these five tenets: genetic change over time, populational change that is not instantaneous, speciation, common ancestry of all species, and natural selection as the cause of apparent design.)  Will creationists really allow the scientific notion of “theory”, as well as a summary of the mountains of evidence that show evolution to be not just a theory but a scientific truth, to be taught to their kids?

Yes, there are some conceivable observations that could invalidate evolution, but we’ve had over 150 years to find them, with creationists working furiously on that job, and no such observations have appeared. To say that evolution is “always subject to change and emendation” is like saying that “the fact that DNA is a double helix, viruses cause Ebola, the Earth goes around the Sun, and the formula of water is H2O” are all theories “subject to change and emendation”. The fact is that some “theories” are highly unlikely to change because the evidence supporting them is wickedly strong, and to claim that they are somehow shaky or dubious is misleading. I would never countenance saying that any of these scientific notions are “just theories,” for the word “just,” as Wade knows well, implies that the evidence supporting them is somehow shaky.

Wade goes on to fulminate about the dogmatism of evolutionists, and touts the uncertainty of evolutionary biology by using the example of group selection, which, he says, is undecided and therefore makes all of evolution appear as “just a theory.” But group selection is a modern add-on to evolutionary biology, and it’s an unsubstantiated hypothesis. Group selection is not a theory in Wade’s sense, something “that accounts for a vast range of phenomena and is in turn supported by them.” It accounts for no phenomena and there are no observations that require us to accept group selection. To say that evolution is “just a theory” because we haven’t settled the question of group selection is like saying that modern particle physics is “just a theory” because string theory is sitting out there as an unresolved problem.

At the end, Wade reiterates his brilliant suggestion for a pact between evolutionists and creationists:

If popularizers like Mr. Nye could allow that the theory of evolution is a theory, not an absolute truth or dogma, they might stand a better chance of getting the fundamentalists out of the science classroom.

Sorry, Mr. Wade, but we already allow that. No observation in science is an “absolute truth or dogma,” but some things are so likely to be true that you’d bet your house on them. One of those things is evolution. In any vernacular sense of the word, evolution is simply true—as true as the fact that DNA is normally a double helix and a normal water molecule has two atoms of hydrogen and one of oxygen.

Wade’s suggestion is ludicrous and, in the end, shows that he really doesn’t understand the nature of science and scientific truth. Nor does he have the slightest idea of how creationists really behave. They’ll no sooner accept his compromise than they’ll admit that there’s no God. After this piece, and Wade’s egregious book A Troublesome Inheritance, the man’s Official Science Writer™ Card should be revoked.


30 May 23:21

[Correspondence] Time to reconsider thyroid cancer screening in Fukushima

by Kenji Shibuya, Stuart Gilmour, Akira Oshima
In October, 2011, as part of the Fukushima Health Management Survey, Fukushima prefecture implemented a thyroid ultrasound examination programme for all children younger than 18 years to “ensure early identification and treatment of thyroid cancer in children.” The Fukushima prefecture collected baseline thyroid cancer prevalence data until March, 2014, assuming “no excess occurrence [of thyroid cancer] in the first three years [after the Fukushima Dai-ichi nuclear accident in March 2011]”. Regular thyroid examinations began in April, 2014, and will be compared with these baseline data.
12 May 20:55

Coherently Embedded Ag Nanostructures in Si: 3D Imaging and their application to SERS

by R. R. Juluri

Surface enhanced Raman spectroscopy (SERS) has been established as a powerful tool to detect very low-concentration bio-molecules. One of the challenging problems is to obtain a reliable and robust SERS substrate. Here, we report on a simple method to grow coherently embedded (endotaxial) silver nanostructures in silicon substrates, analyze their three-dimensional shape by scanning transmission electron microscopy tomography and demonstrate their use as a highly reproducible and stable substrate for SERS measurements. Bi-layers consisting of Ag and GeOx thin films were grown on native-oxide-covered silicon substrates using a physical vapor deposition method. Subsequent annealing at 800 °C under ambient conditions resulted in the formation of endotaxial Ag nanostructures whose specific shape depends upon the substrate orientation. These structures are utilized for the detection of Crystal Violet molecules at concentrations of 5 × 10⁻¹⁰ M. They are expected to be highly robust, reusable and novel substrates for single-molecule detection.

Scientific Reports 4 doi: 10.1038/srep04633

12 Jan 23:14

Trade Liberalization and Wage Inequality: New Insights from a Dynamic Trade Model with Heterogeneous Firms and Comparative Advantage

by Wolfgang Lechthaler, Mariya Mileva
We develop a dynamic general equilibrium trade model with comparative advantage, heterogeneous firms, heterogeneous workers and endogenous firm entry to study wage inequality during the adjustment after trade liberalization. We find that trade liberalization increases wage inequality both in the short run and in the long run. In the short run, wage inequality is mainly driven by an increase in inter-sectoral wage inequality, while in the medium to long run, wage inequality is driven by an increase in the skill premium. Incorporating worker training in the model considerably reduces the effects of trade liberalization on wage inequality. The effects on wage inequality are much more adverse when trade liberalization is unilateral instead of bilateral or restricted to specific sectors instead of including all sectors.
10 Jan 18:08

Blame Canada: Toronto university sparks fracas by supporting student’s refusal to work with women

by whyevolutionistrue
Readers Diana and Lynn, both Canadians, called my attention to a news item from Ontario—and a public debate—taking place about the conflict between state and religion. It’s every bit as portentous as the burqa debate in Europe (France, by the way, just upheld its ban on the burqa by convicting a wearer), but doesn’t seem to have gotten on the international radar screen.
The issue has been reported by the CBC, the Star, and the National Post, but the quotes (indented) are from the Star, whose coverage is most complete. This took place at York University in Toronto, which, like most universities in Canada, is a public school.
In short, a male student of unknown faith (the possibilities include Orthodox Judaism or Islam, but I suspect the latter) asked to opt out of participating in a sociology class’s focus group because it included women—and associating with women violates his religion. The professor refused this request as outrageous, but—and here’s the kicker—the York administration ordered the prof to comply.  The professor, Dr. Paul Grayson, blew the whistle on his administration. That is a brave guy!
Here are the details:

The brouhaha began in September when a student in an online sociology class emailed Grayson about the class’s only in-person requirement: a student-run focus group.

“One of the main reasons that I have chosen internet courses to complete my BA is due to my firm religious beliefs,” the student wrote. “It will not be possible for me to meet in public with a group of women (the majority of my group) to complete some of these tasks.”

While Grayson’s gut reaction was to deny the request, he forwarded the email to the faculty’s dean and the director for the centre for human rights.

Their response shocked him; the student’s request was permitted.

The reasoning was apparently that students studying abroad in the same online class were given accommodations, and allowed to complete an alternative assignment.

“I think Mr. X must be accommodated in exactly the same way as the distant student has been,” the vice dean wrote to Grayson.

That, of course, is insane, because the student was not overseas and his refusal was due not to an inability to travel to Canada but to the fact that he just didn’t want to work with women. And Grayson, in his reply to the dean, pulled no punches:

“York is a secular university. It is not a Protestant, Catholic, Jewish, or Moslem university. In our policy documents and (hopefully) in our classes we cling to the secular idea that all should be treated equally, independent of, for example, their religion or sex or race.

“Treating Mr. X equally would mean that, like other students, he is expected to interact with female students in his group.”

In a masterpiece of political correctness, the University stuck to its guns:

A university provost, speaking on behalf of the dean, said the decision to grant the student’s request was made after consulting legal counsel, the Ontario Human Rights Code and the university’s human rights centre.

“Students often select online courses to help them navigate all types of personal circumstances that make it difficult for them to attend classes on campus, and all students in the class would normally have access to whatever alternative grading scheme had been put in place as a result of the online format,” said Rhonda Lenton, provost and vice president academic.

The director of the Centre for Human Rights also weighed in on the decision in an email to Grayson.

“While I fully share your initial impression, the OHRC does require accommodations based on religious observances.”

Well, perhaps it does, but religious accommodations must give way when they conflict with the public good, and this is a public university. Refusing to associate with women is nothing other than an attempt to cast them as second-class citizens, and that human right trumps whatever misogyny is considered a “religious right.” If Mr. X wants to go to a synagogue in which women must sit in the back, or a mosque in which women can’t pray with men, that is his right, but he doesn’t have any right to make a public university accommodate that lunacy, any more than University College London can enforce gender-segregated seating at public lectures.

What’s the logical outcome of this kind of pandering to religion? Grayson again gave the university no quarter:

The professor argued that if a Christian student refused to interact with a black student, as one could argue with a skewed interpretation of the Bible, the university would undoubtedly reject the request.

“I see no difference in this situation,” Grayson wrote.

The interesting thing is that after hearing from the dean, Grayson (not knowing the student’s religion) consulted both Orthodox Jewish and Islamic scholars at York, who both told him that there was no bar to associating with women in their faiths so long as there was no physical contact. On that basis, Grayson and his colleagues in the sociology department refused the student’s request.

In the end, the student gave in. That might be the end of it, but Grayson still may face disciplinary action (like him, though, I doubt it). Who looks bad here is the university, which would even consider granting such a request.

Apparently this kind of clash between religious and secular values is not unique in Canadian education. As the Star reports:

The incident is the latest clash between religious values and Ontario’s secular education system.

Catholic schools resisted a call by Queen’s Park to allow so-called gay-straight student clubs because of the Vatican’s historic stand against homosexuality. But the government insisted such clubs be permitted as a tool against bullying — and a nod to Ontario’s commitment to freedom of sexual orientation.

Similar debate erupted in 2011 when a Toronto school in a largely Muslim neighbourhood allowed a Friday prayer service in the school cafeteria so that students would not leave for the mosque and not return.

However fewer cases have taken place at the post-secondary level.

Well, public schools are public schools, and they’re all supported by taxpayers. Just like a public university cannot teach creationism as science in the U.S. (at least at Ball State University), so a public university in Canada cannot discriminate against women, even in the name of catering to religious faith. How can the government insist that Catholic schools accept “gay-straight” clubs on the grounds of supporting freedom of sexual orientation, yet allow a student, also on religious grounds, to discriminate against women?

There is no end to crazy religious beliefs, and I see no reason why basic human rights should be abrogated to cater to all those beliefs. The administration of York University now has egg on its face, and Professor Grayson is the hero.

There is now a Care 2 petition that you can sign directed to Martin Singer, Dean of the Faculty of Liberal Arts & Professional Studies, and Noël A. J. Badiou, Director at York University’s Centre for Human Rights, those who supported the student’s right to refuse to associate with women. It reads, in part:

We the undersigned stand up for women’s and men’s equality, as enshrined in the Canadian Charter of Rights and Freedoms, and the Ontario Human Rights Code.

The statements and decisions made in this matter by Mr. Singer and Mr. Badiou suggest that they believe gender equality is subordinate to religious beliefs. We urge York University to retract this and re-affirm their stand on gender equality and women’s rights.

You don’t have to be Canadian to sign it, and right now there are only 453 signatures. They’re aiming for 1,000, so if you agree, head over to this link and add your name.


31 Dec 13:49

Experimental Evidence on the Effects of Home Computers on Academic Achievement among Schoolchildren

by Fairlie, Robert W., Robinson, Jonathan
Computers are an important part of modern education, yet many schoolchildren lack access to a computer at home. We test whether this impedes educational achievement by conducting the largest-ever field experiment that randomly provides free home computers to students. Although computer ownership and use increased substantially, we find no effects on any educational outcomes, including grades, test scores, credits earned, attendance and disciplinary actions. Our estimates are precise enough to rule out even modestly-sized positive or negative impacts. The estimated null effect is consistent with survey evidence showing no change in homework time or other "intermediate" inputs in education.
31 Dec 13:38

Revisionism vs. Classical Compatibilism, part II

by Joseph Campbell

In conversation, Bert Baumgaertner (UI) and Michael Goldsby (WSU) have suggested another way of contrasting my view with Vargas’ view.

First, there is revisionism 1.0: Who cares what the folk think? What matters is what philosophers think, for “free will” is a term of art. Next there is revisionism 1.1: The folk conception of free will is different from the concept we should adopt but we should adopt philosophical compatibilism. This is Vargas’ view. Lastly, there is revisionism 1.2: The folk conception of free will is generally speaking the concept we should adopt but it needs some tweaking here and there. This is Nahmias’ view. Ultimately the difference between Vargas and me is that I lean more toward revisionism 1.2 and even 1.0 than revisionism 1.1.

Manuel suggests that version 1.0 isn't a version of revisionism at all. Perhaps it is best characterized as anti-revisionist and the real debate is between versions 1.1 and 1.2.

Here is where I get confused and uncomfortable. I want to say that there is a core concept of free will that is aligned with sourcehood or the ability to do otherwise or whatever -- something neutral with respect to the compatibility problem. Any supposed connections between the core concept and incompatibilism (or compatibilism, for that matter) are the result of fallacious reasoning.

Can I say this? It isn't clear what would count as evidence for such a claim.

14 Dec 14:45

Peter Tse's The Neural Basis of Free Will: An Overview

by Thomas Nadelhoffer

A while back I posted an exchange between Peter Tse and Neil Levy that focused on parts of Peter's new book, The Neural Basis of Free Will: Criterial Causation.  In the wake of that discussion, I asked Peter if he would be interested in writing up an accessible overview of the argument he develops in the book.  Fortunately, he was happy to oblige!  The following is what he sent me to post here on Flickers.  Given the intersection between work in neuroscience and work on the philosophy of action, I think we all need to work a little harder to understand what's happening on the other half of this disciplinary divide. In that spirit, I have posted Peter's overview below the fold.  Hopefully, everyone will join the ensuing discussion.  If he's right, then free will skeptics like me have some work to do!

Abstract: In my book I use recent developments in neuroscience to show how volitional mental events can be causal within a physicalist paradigm. (1) I begin by attacking the logic of Jaegwon Kim’s exclusion argument, according to which mental information cannot be causal of physical events. I argue that the exclusion argument falls apart if indeterminism is the case. If I am right, I must still build an account of how mental events are causal in the brain. To that end I take as my foundation (2) a new understanding of the neural code that emphasizes rapid synaptic resetting over the traditional emphasis on neural spiking. (3) Such a neural code is an instance of ‘criterial causation,’ which requires modifying standard interventionist conceptions of causation. A synaptic reweighting neural code provides (4) a physical mechanism that accomplishes downward informational causation, (5) a middle path between determinism and randomness, and (6) a way for mind/brain events to turn out otherwise. This ‘synaptic neural code’ allows a constrained form of randomness parameterized by information realized in and set in synaptic weights, which in turn allows physical/informational criteria to be met in multiple possible ways when combined with an account of how randomness in the synapse is amplified to the level of randomness in spike timing. This new view of the neural code also provides (7) a way out of self-causation arguments against the possibility of mental causation. It leads to (8) an emphasis on deliberation and voluntary attentional manipulation as the core of volitional mental causation rather than, say, the correlates of unconscious premotor computations seen in Libet’s readiness potentials. And this new view of the neural code leads to (9) a new theory of the neural correlates of qualia as the ‘precompiled’ informational format that can be manipulated by voluntary attention, which gives qualia a causal role within a physicalist paradigm. I elaborate each of these ideas in turn below.

 

(1) Countering Kim’s exclusion argument. The exclusion argument is, roughly, that the physical substrate does all the causal work that the supervenient mental state is supposed to do, so mental or informational events can play no causal role in material events. On Kim’s reductionistic view, all causation seeps away to the rootmost physical level, i.e. particles or strings. Add to that an assumption of determinism, and the laws of physics applicable at the rootmost level are sufficient to account for event outcomes at that level and every level that might supervene on that level. So informational causation, including voluntary mental causation or any type of free will that relies on it, is ruled out.

I argue that indeterminism undermines this sufficiency, so provides an opening whereby physically realized mental events could be downwardly causal. I argue that biological systems introduced a new kind of physical causation into the universe, one based upon triggering physical actions in response to detected spatiotemporal patterns in energy. This is a very different kind of causation than traditional Newtonian conceptions of the causal attributes of energy, such as mass, momentum, frequency or position, which seem to underlie deterministic and exclusionary intuitions. But patterns, unlike amounts of energy, lack mass and momentum and can be created and destroyed. They only become causal if there are physical detectors that respond to some pattern in energetic inputs. Basing causal chains upon successions of detected patterns in energy, rather than the transfer of energy among particles, opens the door not only to informational downward causation but to causal chains (such as mental causal chains or causal chains that might underlie a game of baseball or bridge) that are not describable by or solely explainable by the laws of physics applicable at the rootmost level. Yes, a succession of patterns must be realized in a physical causal chain that is consistent with the laws of physics, but many other possible causal chains that are also consistent with physical laws are ruled out by informational criteria imposed on indeterministic particle outcomes. Physical/informational criteria set in synaptic weights effectively sculpt informational causal chains out of the ‘substrate’ of possible physical causal chains.

(2) A new view of the neural code: I develop a new understanding of the neural code that emphasizes rapid and dynamic synaptic weight resetting over neural firing as the core engine of information processing in the brain. The neural code is not solely a spike code, but a code where information is transmitted and transformed by flexibly and temporarily changing synaptic weights on a millisecond timescale. One metaphor is the rapid reshaping of the mouth (analogous to rapid, temporary synaptic weight resetting) that must take place just before vibrating air (analogous to spike trains) passes through, if information is to be realized and communicated. What rapid synaptic resetting allows is a moment-by-moment changing of the physical and informational parameters or criteria that have to be met before a neuron will fire. This dictates what information neurons will be responsive to and what they will ‘say’ to one another from moment to moment.

(3) Rethinking interventionist models of causation: Standard interventionist models of causation manipulate A to determine what effects, if any, there might be on B and other variables. If instead of manipulating A's output, we manipulate the criteria, parameters or conditions that B places on A's input, which must be satisfied before B changes or acts, then changes in B do not follow passively from changes in A as they would if A and B were billiard balls. Inputs from A can be identical, but in one case B changes in response to A, and in another it does not. This constant reparameterization of B is what neurons do when they change each other's synaptic weights. What I call "criterial causation" emphasizes that what can vary is either outputs from A to other nodes, or how inputs from A are decoded by receiving nodes. On this view, standard interventionist and Newtonian models of causation are a special case where B places no conditions on input from A. But the brain, if anything, emphasizes causation via reparameterization of B, by, for example, rapidly changing synaptic weights on post-synaptic neurons.

(4) How downward causation works: Downward causation means that events at a supervening level can influence outcomes at the rootmost level. In this context it would mean that information could influence particle paths. While it would be a case of impossible self-causation if a supervening event changed its own present physical basis, it is not impossible that supervening events, such as mental information, could bias future particle paths. How might this work in the brain? The key pattern in the brain to which neurons respond is temporal coincidence. A neuron will only fire if it receives a certain number of coincident inputs from other neurons. Criterial causation occurs where physical criteria imposed by synaptic weights on coincident inputs in turn realize informational criteria for firing. This permits information to be downwardly causal regarding which indeterministic events at the rootmost level will be realized: only those rootmost physical causal chains that meet physically realized informational criteria can drive a postsynaptic neuron to fire, and thus become causal at the level of information processing. Typically the only thing that the set of all possible rootmost physical causal chains that meet those criteria has in common is that they meet the informational criteria set. To try to cut information out of the causal picture here is a mistake: the only way to understand why it is that just this subset of possible physical causal chains—namely those that are also informational causal chains—can occur, is to understand that it is informational criteria that dictate that class of possible outcomes.

The information that will be realized when a neuron’s criteria for firing have been met is already implicit in the set of synaptic weights that impose physical criteria for firing that in turn realize informational criteria for firing. That is, the information is already implicit in these weights before any inputs arrive, just as what sound your mouth will make is implicit in its shape before vibrating air is passed through. Assuming indeterminism, many combinations of possible particle paths can satisfy given physical criteria, and many more cannot. The subset that can satisfy the physical criteria needed to make a neuron fire is also the subset that can satisfy the informational criteria for firing (such as ‘is a face’) that those synaptic weights realize. So sets of possible paths that are open to indeterministic elementary particles which do not also realize an informational causal chain are in essence “deselected” by synaptic settings by virtue of the failure of those sets of paths to meet physical/informational criteria for the release of a spike.

(5) Between determinism and randomness: Hume (1739) wrote “’tis impossible to admit of any medium betwixt chance and an absolute necessity.” Many other philosophers have seen no middle path to free will between the equally ‘unfree’ extremes of determinism and randomness. They have either concluded that free will does not exist, or tried to argue that a weak version of free will, namely, ‘freedom from coercion,’ is compatible with determinism.

A strong conception of free will, however, is not compatible with either determined or random choices, because in the determined case there are no alternative outcomes and things cannot turn out otherwise, while in the random case what happens does not happen because it was willed. A strong free will requires meeting some high demands: Beings with free will (a) must have information processing circuits that have multiple courses of physical or mental activity open to them; (b) they must really be able to choose among them; (c) they must be or must have been able to have chosen otherwise once they have chosen; and (d) the choice must not be dictated by randomness alone, but by the informational parameters realized in those circuits. This is a tough bill to fill, since it seems to require that acts of free will involve acts of self-causation.

Criterial causation offers a middle path between the two extremes of determinism and randomness that Hume was not in a position to see, namely, that physically realized informational criteria parameterize what class of neural activity can be causal of subsequent neural events. The information that meets preset physical/informational criteria may be random to a degree, but it must meet those criteria if it is to lead to neural firing, so is not utterly random. Preceding brain activity specifies the range of possible random outcomes to include only those that meet preset informational criteria for firing.

(6) How brain/mind events can turn out otherwise: The key mechanism, I argue, whereby atomic level indeterminism has its effects on macroscopic neural behavior is that it introduces randomness in spike timing. There is no need for bizarre notions such as consciousness collapsing wave packets or any other strange quantum effects beyond this. For example, quantum level noise expressed at the level of individual atoms, such as single magnesium atoms that block NMDA receptors, is amplified to the level of randomness and near chaos in neural and neural circuit spiking behavior. A single photon can even trigger neural firing in a stunning example of amplification from the quantum to macroscopic domains. The brain evolved to harness such ‘noise’ for information processing ends. Since the system is organized around coincidence detection, where spike coincidences (simultaneous arrival of spikes) are key triggers of informational realization (i.e. making neurons fire that are tuned to particular informational criteria), randomizing which incoming spike coincidences might meet a neuron's criteria for firing means informational parameters can be met in multiple ways just by chance.
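To make the coincidence-detection picture above concrete, here is a toy numerical sketch (my own illustration, not a model from the book): the synaptic weights and firing threshold encode the "criteria" a pattern of inputs must meet, and random jitter in spike timing means that many different concrete input realizations can satisfy, or fail to satisfy, one and the same criterion. All names, weights and timing parameters below are invented for the example.

```python
import numpy as np

def criterial_neuron_fires(spike_times_ms, weights, threshold, window_ms=2.0):
    """Toy coincidence detector: the neuron fires iff the summed synaptic weight
    of inputs arriving within a short coincidence window reaches the threshold
    (the 'physical criterion' currently set by the synaptic weights)."""
    t = np.asarray(spike_times_ms, dtype=float)
    for t0 in t:
        coincident = np.abs(t - t0) <= window_ms
        if weights[coincident].sum() >= threshold:
            return True
    return False

rng = np.random.default_rng(0)
weights = np.array([0.5, 0.5, 0.5, 0.1])   # criterion: roughly three strong inputs must coincide
threshold = 1.4
nominal_times = np.full(4, 10.0)           # presynaptic spikes nominally simultaneous at t = 10 ms

# Timing jitter (standing in for amplified synaptic-level noise) decides which
# random realizations happen to meet the fixed criterion and so trigger a spike.
outcomes = [criterial_neuron_fires(nominal_times + rng.normal(0.0, 1.5, size=4),
                                   weights, threshold)
            for _ in range(1000)]
print(f"criterion met on {np.mean(outcomes):.0%} of noisy realizations")
```

The point of the sketch is only that the weights fix which class of input patterns can be causal, while the jitter leaves open which member of that class actually occurs.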

(7) Skirting self-causation: A synaptic account of the neural code also gets around some thorny problems of self-causation that have been used to argue against the possibility of mental causation. The traditional argument is that a mental event realized in neural event x cannot change x because this would entail impossible self-causation. Criterial causation gets around this by granting that present self-causation is impossible. But it allows neurons to alter the physical realization of possible future mental events in a way that escapes the problem of self-causation of the mental upon the physical. Mental causation is crucially about setting synaptic weights. These serve as the physical grounds for the informational parameters that must be met by unpredictable future mental events.

(8) Voluntary attention and free will: I argue that the core circuits underlying free choice involve frontoparietal circuits that facilitate deliberation among options that are represented and manipulated in executive working memory areas. Playing out scenarios internally as virtual experience allows a superthreshold option to be chosen before specific motoric actions are planned. The chosen option can best meet criteria held in working memory, constrained by conditions of various evaluative circuits, including reward, emotional and cognitive circuits. This process also harnesses synaptic and ultimately atomic level randomness to foster the generation of novel and unforeseeable satisfactions of those criteria. Once criteria are met, executive circuits can alter synaptic weights on other circuits that will implement a planned operation or action.

(9) A new theory of qualia: The paradigmatic case of volitional mental control of behavior is voluntary attentional manipulation of representations in working memory such as the voluntary attentional tracking of one or a few objects among numerous otherwise identical objects. If there is a flock of indistinguishable birds, there is nothing about any individual bird that makes it more salient. But with volitional attention, any bird can be marked and kept track of. This salience is not driven by anything in the stimulus. It is voluntarily imposed on bottom-up information, and can lead to eventual motoric acts, such as shooting or pointing at the tracked bird. This leads to viewing the neural basis of attention and consciousness as not only realized in part in rapid synaptic reweighting, but also in particular patterns of spikes that serve as higher level units that traverse neural circuits and open what I call the ‘NMDA channel of communication.’ Qualia are necessary for volitional mental causation because they are the only informational format available to volitional attentional operations. Actions that follow volitional attentional operations, such as volitional tracking, cannot happen without consciousness. Qualia on this account are a ‘precompiled’ informational format made available to attentional selection and operations by earlier, unconscious information processing.

Conclusion: Assuming indeterminism, it is possible to be a physicalist who adheres to a strong conception of free will. On this view, mental and brain events really can turn out otherwise, yet are not utterly random. Prior neuronally realized information parameterizes what subsequent neuronally realized informational states will pass presently set physical/informational criteria for firing. This does not mean that we are utterly free to choose what we want to want. Some wants and criteria are innate, such as what smells good or bad. However, given a set of such innate parameters, the brain can generate and play out options, then select an option that adequately meets criteria, or generate further options. This process is closely tied to voluntary attentional manipulation in working memory, more commonly thought of as deliberation or imagination. Imagination is where the action is in free will.

14 Dec 14:36

Who Owns the Code of Life?

by Peter W. Huber

Washington control of genetic data would send us down the road to medical serfdom.

Myriad Genetics spent half a billion dollars developing a database of mutations of the BRCA1 and BRCA2 genes. When proteins coded in the genes (as shown in BRCA1 above) fail to do their genetic repair work properly, cancer risk is increased. (Image credit: Dr. Mark J. Winter/Science Source)

A vast amount of valuable biochemical know-how is embedded in genes and in the complex biochemical webs that they create and control. Does the fact that nature invented it mean that it’s up for grabs? Lots of people are eager to grab it, Washington’s aspiring managers of our health-care economy now prominent among them.

At issue in the Myriad Genetics case decided by the Supreme Court in June were patent claims involving two genes, BRCA1 and BRCA2. Myriad has spent $500 million analyzing thousands of samples submitted by doctors and patients in search of BRCA mutations that sharply increase a woman’s risk of developing breast or ovarian cancer. Whenever the company found a mutation that it hadn’t seen before, it offered free testing to the patient’s relatives in exchange for information about their cancer history. As a result, Myriad can now predict the likely effects of about 97 percent of the BRCA mutations that it receives for analysis, up from about 60 percent 17 years ago.

In Myriad, all nine justices agreed that merely being the first to isolate a “naturally occurring” gene or other “product of nature” doesn’t entitle you to a patent. That came as no surprise. A year earlier, in Mayo v. Prometheus Labs, the Court had rejected a patent for a way to prescribe appropriate doses of certain drugs by tracking their metabolites in the patient’s blood, reaffirming long-standing rules that patents may not lay claim to a “law of nature,” “natural phenomenon,” or “abstract idea.” Previous rulings by the federal circuit court that decides patent appeals had reached similar conclusions in addressing attempts to patent all drugs that might be designed to control specific biochemical pathways. A description of a biological “mechanism of action” without a description of a new device or method to exploit it in some useful way is merely a “hunting license” for inventions not yet developed. A patent must describe “a complete and final invention”—a drug, for example, together with a description of the disorder that it can cure.

But very often, much of the cost of inventing the patentable cure is incurred working out the non-patentable molecular mechanics of the disease because all innovation in molecular medicine must in some way mimic or mirror molecular mechanisms of action already invented by nature. And spurred by the enormous promise of precisely targeted molecular medicine, a substantial part of our health-care economy is now engaged in working out the molecular mechanics of diseases. Washington is funding genomic research projects. Drug companies are heavily involved, joined by a rapidly growing number of diagnostic service companies. Doctors are gathering reams of new molecular data, patient by patient. Hospitals are mining their records for internal use and for sale to outsiders. Device manufacturers are racing to provide molecular diagnostic capabilities directly to consumers. Private insurance companies are mining the information they receive when claims are filed. And there are many signs that Washington intends to take charge of all of the above, as it tightens its grip on diagnostic devices and tests, what doctors diagnose, which diagnoses insurers cover at what price, and how the information acquired is distributed and used.

The patentability of genes is thus only one piece of a much broader debate about who will own and control the torrents of information that we have recently begun to extract from the most free, fecund, competitive, dynamic, intelligent, and valuable repository of know-how on the planet—life itself. That the private sector is already actively engaged in the extraction and analysis is a promising sign, but getting it fully engaged will require robust intellectual property rights, framed for a unique environment in which every fundamentally new invention must be anchored in a new understanding of some aspect of molecular biology. Individual gene patents are out of the picture now, but other forms of intellectual property already provide some protection. We should reaffirm and expand them. And we should view Washington’s plans to take charge instead for what they are: the most ambitious attempt to control the flow of information that the world has ever seen.

Drug companies began systematically exploring the molecular mechanics of diseases more than 30 years ago. Sometimes a gene itself is the essence of the cure. The insulin used by diabetics was extracted from pigs and cows until Genentech and Eli Lilly inserted the human insulin gene into a bacterium and brought “humulin” to market in 1982. Other therapies use viruses to insert into the patient’s cells a gene that codes for a healthy form of a flawed or missing protein. Recent “cancer immunotherapy” trials have shown great promise: the patient is treated with his or her own immune-system cells, genetically modified to induce them to attack their cancerous siblings. More often, a drug is designed to target a protein associated with a specific gene. “Structure-based” drug design and the biochemical wizardry used to produce monoclonal antibodies allow biochemists to craft molecules precisely matched to a target protein that plays a key role in, say, replicating HIV or a cancer cell. An FDA official recently estimated that 10 percent to 50 percent of drugs in pharmaceutical companies’ pipelines involve targeted therapies, and about one-third of new drugs approved by the FDA last year included genetic patient-selection criteria.

The development and use of all such therapies hinge, however, on understanding the roles that genes and proteins play in causing medically significant clinical effects. And as Myriad’s huge database illustrates, what looks like a single disorder to the clinician can often be caused by many different variations in genes that may interact in complex ways and that sometimes change on the fly, as they do in fast-mutating cancer cells or viruses like HIV. The molecular-to-clinical links are still more complex when drugs are added to modulate one or more of the patient-side molecules and unintended side effects enter the picture.

The databases and sophisticated analytical engines that must be developed to unravel these causal connections usually go far beyond “abstract ideas” or what any scientist would call a “law of nature.” To guide the prescription of HIV drug cocktails, Europe’s EuResist Network draws on data from tens of thousands of patients involving more than 100,000 treatment regimens associated with more than a million records of viral genetic sequences, viral loads, and white blood cell counts. Oncologists now speak of treatment “algorithms”—sets of rules for selecting and combining multiple drugs in cocktails that must often be adjusted during the course of treatment, as mutating cancer cells become resistant to some drugs and susceptible to others. IBM recently announced the arrival of a system to guide the prescription of cancer drugs, developed in partnership with WellPoint and Memorial Sloan-Kettering and powered by the supercomputer that won the engine-versus-experts challenge on Jeopardy. It has the power to sift through more than a million patient records representing decades of cancer-treatment history, and it will continue to add records and learn on the job.
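
To make the notion of a treatment "algorithm" concrete, here is a minimal Python sketch of the kind of rule table such systems encode; the drug names, tumor markers, and rules below are wholly invented for illustration and do not describe Watson, EuResist, or any real clinical protocol.

    # Toy rule-based treatment "algorithm": hypothetical rules map made-up tumor
    # markers to a candidate drug cocktail. Nothing here reflects real clinical guidance.

    def recommend_cocktail(markers):
        """Return a list of hypothetical drugs based on invented marker rules."""
        cocktail = []
        if markers.get("ER_positive"):
            cocktail.append("hormone_blocker_A")        # hypothetical agent
        if markers.get("HER2_amplified"):
            cocktail.append("targeted_antibody_B")      # hypothetical agent
        if markers.get("resistance_mutation_X"):
            # rules are revised as the tumor mutates during treatment
            cocktail = [d for d in cocktail if d != "targeted_antibody_B"]
            cocktail.append("second_line_inhibitor_C")  # hypothetical agent
        return cocktail or ["standard_chemotherapy"]

    print(recommend_cocktail({"ER_positive": True, "HER2_amplified": True}))
    print(recommend_cocktail({"HER2_amplified": True, "resistance_mutation_X": True}))

Real systems of the kind described above add a learning layer that revises such rules against outcome data, but the underlying object is still a rule set mapping molecular findings to candidate treatments.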

Other companies are vying to combine efficient DNA-sequencing systems with interpretive engines that can analyze large numbers of genes and the clinical data needed to reveal their medical implications at prices that rival what Myriad is charging to analyze just two genes. Last year, 23andMe, a company founded to provide consumer genetic-sequencing services, announced that it would let other providers develop applications that would interact with data entrusted to 23andMe by its customers. Hundreds soon did. Their interests, Wired reported, included “integrating genetic data with electronic health records for studies at major research centers and . . . building consumer-health applications focused on diet, nutrition and sleep.” For individuals, 23andMe’s platform will, in the words of the company’s director of engineering, serve as “an operating system for your genome, a way that you can authorize what happens with your genome online.” Numerous websites are already coordinating ad hoc “crowd-sourced” studies of how patients respond to treatments for various diseases. The not-for-profit Cancer Commons is pursuing an “open science initiative linking cancer patients, physicians, and scientists in rapid learning communities.”

How much protection, if any, these databases, online services, and analytical engines will receive from our intellectual property laws remains to be seen. Indeed, where Myriad leaves the thousands of gene patents already granted is unclear—it will take years of further litigation to find out. Genes themselves or their biochemical logic are routinely incorporated into a wide variety of medical diagnostic tests and therapies; other sectors of the economy use genetically engineered cells, plants, and live organisms. Most such applications of genetic know-how will probably remain patentable because the constituent parts of an innovative product or process need not be patentable in themselves. Innovative methods for sequencing genes or analyzing their clinical implications will also remain patentable. What does seem clear is that in Myriad, the Court went out of its way to reaffirm and even broaden the scope of its earlier Mayo decision: a claim involving nothing more than the mechanistic science or an empirical correlation that links molecular causes to clinical effects isn’t patentable.

From the innovator’s perspective, however, patents that cover biological know-how only insofar as it is incorporated into an innovative drug or a diagnostic device provide little, if any, practical protection for what is often a large component of the ingenuity and cost of the invention. The successful development of a pioneering drug reveals key information about the molecular mechanics of a disease and a good strategy for controlling it. Armed with that knowledge, competitors can then modify the drug’s chemistry just enough to dodge the pioneer’s patent and rush in with slightly different drugs developed at much lower cost. In the end, the pioneer can easily be the only player that fails to profit from its own pathbreaking work. A Japanese researcher extracted the first statin from a fungus and discovered that it inhibits an enzyme that plays a key role in cholesterol synthesis; a colleague established its efficacy in limited human trials in the late 1970s. Following that lead, others quickly found or synthesized slightly different statins that worked better and had fewer side effects. Pfizer’s Lipitor, which was licensed two decades later, became the most lucrative drug in history.

“Repurposing” presents the flip side of the same problem. Nature often uses the same or very similar molecules to perform different functions at different points in our bodies. These molecules may then be involved in what used to be viewed as different diseases, and the same drug can then be used to treat them all. But when independent researchers or practicing doctors discover the new use, they usually lack the resources to conduct the clinical trials needed to get the drug’s license amended to cover it and can’t, in any event, market the drug for the new use so long as the patent on its chemistry lasts. And for both independent researchers and drug companies themselves, there is no profit in searching for new uses for an old drug unless there is a profitable market ahead that will cover the cost of the search. The new use may be entitled to what is called a “method” patent, but the patent will easily be dodged if the original patent has expired and cheap generic copies of the drug are readily available for doctors to prescribe.

Similar valuable spillovers occur as drug companies and others catalog the genes that determine how drugs interact with molecular bystanders. Countless patients owe a large debt to those who established that genetic variations in one group of enzymes cause some people to metabolize antidepressants, anticoagulants, and about 30 other types of drugs too quickly, before the drugs have a chance to work, and cause others to metabolize them too slowly, allowing the drugs to accumulate to toxic levels. The discovery of a new drug-modulating gene will often help improve the prescription of other drugs, too. Here again, only a fraction of the value of working out the link between variations in a particular gene and a drug’s performance will be captured by the company that discovers the link and incorporates that information into a first drug’s label.

Innovative diagnostic services are particularly vulnerable to free riders because their sole purpose is to convey information. Soon after it became clear that the Supreme Court would probably invalidate Myriad’s gene patents, two academic researchers teamed up with a gene-testing company and a venture-capital firm to buy clinical records from doctors who have treated patients using reports supplied by Myriad. A website set up on the day that the Supreme Court released its Myriad ruling invites patients to give away the same data instead. Neither scheme will create any new know-how; to the extent that they succeed, both will simply replicate Myriad’s database.

Washington is now hatching plans to free all the biological information that (in Washington’s view) deserves to be free. It’s also trying to make sure that we don’t misunderstand or even bother collecting the information that doesn’t.

To facilitate the development of “a new taxonomy of human diseases based on molecular biology,” a 2011 report commissioned by the National Institutes of Health recommends creation of a broadly accessible “Knowledge Network” that will aggregate data spanning all the molecular, clinical, and environmental factors that can affect our health. Last June, the NIH expressed strong support for a plan to standardize the collection, analysis, and sharing of genomic and clinical data across 41 countries. The 2011 report endorses government-funded pilot programs; the curtailment of at least some existing intellectual property rights (though “guidelines for intellectual property need to be clarified and concerns about loss of intellectual property rights addressed”); and “strong incentives” to promote participation by payers and providers. Indeed, the government may “ultimately need to require participation in such Knowledge Networks for reimbursement of health care expenses.” And data-sharing standards must be framed to discourage “proprietary databases for commercial intent.”

Somewhat paradoxically, the report hastens to add that its proposals extend only to the “pre-competitive” phase of research, presumably to leave some room for intellectual property rights directly tied to diagnostic devices and drugs. But there is no such phase—in pursuit of better products and services, the private sector is already competing to do most everything that the report describes. And a new molecular taxonomy of disease is already emerging: a steadily growing number of yesterday’s diseases now come with prefixes or suffixes that designate some biochemical detail to distinguish different forms—for example, “ER+,” for “estrogen-receptor-positive,” in front of “breast cancer.”

Meanwhile, Washington’s paymasters are busy solidifying their control of much of the data gathering and sharing. The Preventive Services Task Force decides which screening tests must be fully covered for which classes of patients by all private insurance policies, and which should be skipped for either medical or cost reasons. What insurers may charge is regulated, too, so more money spent complying with screening mandates will inevitably mean less spent on disfavored tests and associated treatments. Washington also has ambitious plans to take charge of pooling and analyzing the data that emerge. A new national network, overseen by a national coordinator for health information technology, will funnel medical data from providers and patients to designated scientists, statisticians, and other public and private providers and insurers.

For its part, the FDA has made clear that it intends to maintain tight control of the diagnostic services and devices that enable individuals to read their own molecular scripts. In 2010, Walgreens abruptly canceled plans to sell a test kit called Insight when informed by the FDA that the kit lacked a license. The mail-in saliva-collection kit would have told you what your genes might have to say about dozens of things, among them Alzheimer’s, breast cancer, diabetes, blood disorders, kidney disease, heart attacks, high blood pressure, leukemia, lung cancer, multiple sclerosis, obesity, psoriasis, cystic fibrosis, Tay-Sachs, and going blind—and also how your body might respond to caffeine, cholesterol drugs, blood thinners, and other prescription drugs. Invoking its authority to license every diagnostic “contrivance,” “in vitro agent,” or “other similar or related article,” the FDA announced a crackdown on all companies that attempted to sell such things to the public without the agency’s permission. The agency is determined to protect consumers from what it considers to be “unsupported clinical interpretations” supplied by providers or medically inappropriate responses to diagnostic reports by consumers themselves.

There is much reason to doubt, however, that Washington is qualified to teach the rest of the country how to gather or analyze biological know-how. For the last 50 years, the FDA, by scripting the clinical trials required to get drugs licensed, has controlled how those trials investigate the molecular factors that determine how well different patients respond to the same drug. The trials have, in fact, investigated appallingly little, because the agency still clings to testing protocols developed long before the advent of modern molecular medicine (see “Curing Diversity,” Autumn 2008). A report released in September 2012 by President Obama’s Council of Advisors on Science and Technology (PCAST) urges the FDA to adopt “modern statistical designs” to handle new types of trials that would gather far more information about the molecular biology that controls the development of the disease and its response to drugs. In a speech given last May, Janet Woodcock, head of the FDA’s Center for Drug Evaluation and Research, acknowledged the need to “turn the clinical trial paradigm on its head.”

More generally, Washington has been a persistent (and often inept) laggard in moving its supervisory, transactional, and information-gathering and dissemination operations into the digital age. The FDA’s “incompatible” and “outdated” information-technology systems, the PCAST report notes, are “woefully inadequate.” The agency lacks the “ability to integrate, manage, and analyze data . . . across offices and divisions,” and the processing of a new drug submission may thus involve “significant manual data manipulation.” Other parts of Washington launched plans to push digital technology into the rest of the health-care arena around 2005, encouraged by a RAND Corporation estimate that doing so would save the United States at least $81 billion a year. A follow-up report released early this year concluded, in the words of one of its authors, that “we’ve not achieved the productivity and quality benefits that are unquestionably there for the taking.”

Washington launched the modern era of comprehensive genetic mapping when it started funding the Human Genome Project in the late 1980s. Rarely has the federal government found a better way to spend $3 billion of taxpayer money. But far more mapping has been done since then with private funds, and finishing the job is going to cost trillions, not billions—far more than Washington can pay. In a welcoming environment, Wall Street, venture capitalists, monster drug companies, small biotechs, research hospitals, and many others would be pouring intellect and money into this process. The biosphere offers unlimited opportunity for valuable innovation. The technology is new, fantastically powerful, and constantly improving; the demand for what it can supply is insatiable. But Washington’s heavy-handed control of both the science and the economics of the data acquisition, analysis, and distribution will drive much of the private money out of the market. We should instead give the market the property rights and pricing flexibility that will get the private money fully engaged and leading the way.

As the Supreme Court has recognized, a property right that curtails the use of basic science may foreclose too much innovation by others later on. The discovery of a promising target, for example, shouldn’t be allowed to halt all competitive development of molecules that can detect or modulate it. Instead, we need rights that will simultaneously promote private investment in the expensive process of reverse-engineering nature and the broad distribution of the know-how thus acquired, so as to launch a broad range of follow-up research and innovation and allow front-end costs to be spread efficiently across future beneficiaries.

Models for property rights framed to strike such a balance already exist. As it happens, federal law already grants drug companies a copyright of sorts: a “data exclusivity” right that, for periods up to 12 years, bars the manufacturers of generic drugs from hitching a free ride through the FDA by citing the successful clinical trials already conducted by the pioneer. Data exclusivity rights could easily be extended to cover the development of biological know-how as well, enabling the market to spread the cost of developing such knowledge across the broad base of future beneficiaries. The rights should be narrowly tailored to specific diseases, but they should also last a long time.

The agency that issues data exclusivity rights should also have the authority to oversee a compulsory licensing process analogous to the one that allows singers to perform songs composed by others—or to the government’s “march-in” right to license patents for inventions developed with government funding to “reasonable applicants” if the patent holder has failed to take “effective steps to achieve practical application of the subject invention” or, more broadly, as needed to address the public’s “health and safety needs.” Licensing fees should be modest, and total licensing fees collected might be capped at a level commensurate with the cost of developing the biological know-how and its role in creating new value thereafter.

In addition to mobilizing private capital to develop know-how, well-crafted property rights promote its broad and economically efficient distribution. To begin with, they make the know-how immediately available—though not exploitable for profit—to researchers and competitors. Copyrights require publication of the protected content; patents require a description complete enough to enable others in the field to exploit the “full scope” of the claimed invention. Going forward, Myriad and companies like it will instead treat their discoveries and databases as trade secrets and frame contracts to forbid disclosure of their data by the doctors and patients who use their services.

As entertainment and digital software and hardware markets have also demonstrated, markets protected by the right forms of intellectual property usually find ways to develop tiered pricing schemes that allow widespread distribution of information at economically efficient prices while the rights last. By contrast, current policies in the health-care arena load most of the cost of developing biological know-how onto the shoulders of the patients who buy the pioneering drug or device while its patent lasts. The market will develop more pioneer drugs—and government paymasters will be more willing to accept them—if one important component of the know-how costs can be spread efficiently across many more products, services, and patients.

The emergence of a robust market for molecular and clinical information will also draw an important new competitor into drug markets: the independent researcher or company that develops the biological science needed to select the most promising targets and to design successful clinical trials for new drugs. Drug companies will inevitably continue to play a large role in the search for patient-side information. But for much the same reason that software companies welcome advances in digital hardware, drug companies should welcome the rise of independent companies with expertise in connecting molecular effects to clinical ones in human bodies. Independent companies are permitted to interact much more freely with doctors and patients. And by accelerating the development and wider distribution of information about the molecular roots of health and disease, companies that specialize in predictive molecular diagnosis will help mobilize demand for the expeditious delivery of drugs.

Finally, private markets have established that we do not need a government takeover to maximize the value of information by consolidating it efficiently in large databases. For well over a century, competing companies have been pooling their patents and interconnecting their telegraph, phone, and data networks, online reservation systems, and countless other information conduits and repositories. Aggregators buy up portfolios of patents and licenses for songs, movies, TV shows, and electronic books, and then offer access in many different, economically efficient, ways to all comers. The global financial network depends on instant data sharing among cooperating competitors. Many online commercial services today hinge on the market’s willingness to interconnect and exchange information stored in secure, proprietary databases.

With appropriate property rights in place, the economics of the market for clinical data will, in all likelihood, be largely self-regulating. The accumulation of new data points steadily dilutes the value of the old, though their aggregate value continues to grow. New data can be acquired every time a patient is diagnosed or treated. The best way to accelerate the process that makes information ubiquitously and cheaply available is to begin with economic incentives that reward those who develop new information faster and come up with ways to distribute it widely at economically efficient prices.

Conspicuous by its frequent absence in debates about who owns biological know-how is the patient’s right not to disclose information about his or her innards to anyone—the corollary of which should be a right to give it away, share it, sell it outright, or license it selectively, at whatever price the market will bear. The individual patient won’t often be interested in haggling over what a clinical record might be worth to a hospital or an insurance company. But emphatically reaffirming the private ownership of private information is the first, essential step in creating markets for those who would help collect and analyze the data. It would also help set the stage for some serious constitutional challenges to Washington’s attempts to displace them.

Washington assures us that patient privacy will be protected when it pools medical records for analysis by Washington-approved researchers and statisticians—but its policies reflect the conviction that the patient’s privacy rights will be bought by whoever is paying for the care, whether Washington itself or a private insurance company operating under Washington’s thumb. The Health Insurance Portability and Accountability Act of 1996 does indeed require that records, before they are shared, must be redacted to ensure that they can’t be linked back to individual patients. But Washington’s guidelines on “de-identification” of private health-care data are naively optimistic about what that would require. In a paper published in Science last January, an MIT research team described how easily it had used a genealogy website and publicly available records to extract patients’ identities from ostensibly anonymous DNA data. Genetic profiles reveal gender, race, family connections, ethnic identity, and facial features; they correlate quite well with surnames. DNA fingerprinting following an arrest is now routine, and a close match with a relative’s DNA can provide an excellent lead to your identity. Further, state-run clinics and hospitals are exempt from the federal requirements, and some are already selling patient records to outsiders, sometimes neglecting to remove information such as age, zip codes, and admission and discharge dates.
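
A minimal sketch of why stripping names is often not enough: if "de-identified" records retain quasi-identifiers such as zip code, birth year, and sex, they can frequently be joined to public rosters that carry names. All records below are fabricated for illustration; the study described above used genetic genealogy together with public records rather than this simple join.

    # Toy linkage attack: join "de-identified" records to a public roster on
    # quasi-identifiers (zip, birth year, sex). All data here is fabricated.

    deidentified = [
        {"zip": "30301", "birth_year": 1962, "sex": "F", "diagnosis": "BRCA1 variant"},
        {"zip": "48201", "birth_year": 1985, "sex": "M", "diagnosis": "type 2 diabetes"},
    ]

    public_roster = [
        {"name": "Jane Roe",  "zip": "30301", "birth_year": 1962, "sex": "F"},
        {"name": "John Doe",  "zip": "48201", "birth_year": 1985, "sex": "M"},
        {"name": "Ann Smith", "zip": "48201", "birth_year": 1990, "sex": "F"},
    ]

    for record in deidentified:
        matches = [p["name"] for p in public_roster
                   if (p["zip"], p["birth_year"], p["sex"]) ==
                      (record["zip"], record["birth_year"], record["sex"])]
        if len(matches) == 1:   # a unique match re-identifies the patient
            print(matches[0], "->", record["diagnosis"])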

Even if privacy were fully protected, a promise to “de-identify” data doesn’t give the government the right to seize and distribute private information in the first place. Washington, it appears, now aspires to create what is, for all practical purposes, a federal mandate to participate in an insurance system in which Washington minutely regulates which diagnostic tests are administered and which treatments are provided—and then requires the sharing of the data acquired in the course of treatment. It seems unlikely that such a system can withstand constitutional scrutiny. A government grab for information, whether direct or through private intermediaries, is subject to the Fourth Amendment, which places strict limits on such invasions of our privacy.

At the opposite pole, the FDA’s decision to protect the masses from unapproved clinical interpretations of molecular data, or to limit or bar access to the information on the ground that the masses aren’t smart enough to handle it wisely, also looks constitutionally vulnerable. Home-use diagnostic devices and services offer patients the absolute medical privacy that they are entitled to. And the Supreme Court has repeatedly concluded that freedom of speech includes a private right to listen, read, and study that is even broader than the right to speak, write, and teach. Enabling tools and technology—the printing press and ink that produce the newspaper, for example—enjoy the same constitutional protection. Lower courts have recently also affirmed the First Amendment right to discuss off-label drug prescriptions. A constitutional right to engage in officially unapproved discussion about what a drug might do to a patient’s body must surely also cover unapproved discussion about what the patient’s own genes and proteins might do.

If we let them, markets for biological know-how will do exactly what markets do best and deliver what molecular medicine most needs. Propelled as they are by dispersed initiative and private choice, free markets are uniquely good at extracting and synthesizing information that’s widely dispersed among innovators, investors, workers, and customers—“personal knowledge,” in the words of Michael Polanyi, the brilliant Hungarian-British chemist, economist, and philosopher. Nowhere could the free market’s information-extracting genius be more important than in a market for products whose value depends on their ability to mirror biochemical information inside the people who use them.

The Supreme Court’s Myriad ruling is a reasonable construction of the patent law as currently written. But that we now find ourselves pondering whether Washington should be taking full control of the private genetic and clinical data that individuals supply—and that private companies like Myriad are eager to collect and analyze—provides a chilling reminder of how rapidly we are slouching down the road to medical serfdom.

Peter W. Huber is a Manhattan Institute senior fellow and the author of the forthcoming The Cure in the Code: How 20th Century Law Is Undermining 21st Century Medicine.

19 Nov 10:38

A comparison of public and private schools in Spain using robust nonparametric frontier methods

by Cordero, José Manuel, Prior, Diego, Simancas Rodríguez, Rosa
This paper uses an innovative approach to evaluate the educational performance of Spanish students in PISA 2009. Our purpose is to decompose their overall inefficiency into different components, with a special focus on the differences between public and state-subsidised private schools. We use a technique inspired by the non-parametric Free Disposal Hull (FDH) and apply robust order-m models, which allow us to mitigate the influence of outliers and the curse of dimensionality. Subsequently, we adopt a metafrontier framework to assess each student relative to their own group's best-practice frontier (students in the same school) and to frontiers constructed from the best practices of different types of schools. The results show that state-subsidised private schools outperform public schools, although the differences between them are significantly reduced once we control for the type of students enrolled in both types of centres.
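
For readers unfamiliar with order-m frontiers, the following is a minimal Monte Carlo sketch of an output-oriented order-m efficiency score for a single input and a single output; the data, the choice of m, and every name are illustrative assumptions, and the paper's actual implementation may well differ.

    # Minimal Monte Carlo sketch of an output-oriented order-m efficiency score
    # (in the spirit of Cazals, Florens and Simar) for one input and one output.
    # Scores above 1 suggest the unit could expand output relative to m randomly
    # drawn peers that use no more input. Data below are made up.

    import random

    def order_m_score(x0, y0, sample, m=25, reps=2000, seed=0):
        rng = random.Random(seed)
        peers = [y for x, y in sample if x <= x0]   # FDH-dominating candidates
        if not peers:
            return 1.0
        total = 0.0
        for _ in range(reps):
            draw = [rng.choice(peers) for _ in range(m)]   # m peers, with replacement
            total += max(draw) / y0
        return total / reps

    # sample of (input, output) pairs, e.g. (hours of instruction, test score)
    data = [(10, 50), (12, 64), (11, 58), (15, 70), (9, 45), (14, 69)]
    print(round(order_m_score(11, 58, data), 3))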
19 Nov 10:38

Applying the Capability Approach to the French Education System: An Assessment of the "Pourquoi pas moi ?"

by André, Kévin
This paper attempts to re-examine the notion of equality, going beyond the classic opposition in France between affirmative action and meritocratic equality. Hence, we propose shifting the French debate about equality of opportunities in education to the question of how to raise equality of capability. In this paper we propose an assessment, based on the capability approach, of a mentoring programme called 'Une grande école: pourquoi pas moi?' ('A top-level university: why not me?'; PQPM), launched in 2002 by a top French business school. The assessment of PQPM is based on the pairing of longitudinal data available for 324 PQPM students with national data. Results show that the 'adaptive preferences' of the PQPM students change through a process of empowerment. Students adopt new 'elitist' curricula but feel free to follow alternative paths.
04 Nov 10:52

Political Learning and Officials’ Motivations: An Empirical Analysis of the Education Reform in the State of São Paulo

by Thomaz M. F. Gemignani, Ricardo de Abreu Madeira
We investigate the occurrence of social learning among government officials in a context of decentralization of political responsibilities - the schooling decentralization reform of the state of São Paulo - and use it to analyze the officials' motivations driving adhesion to that program. We explore how the information exchange about the newly adopted tasks is configured and which aspects of the returns to decentralization are most valued by officials in their learning process. In particular, we try to determine to what extent adhesion to the reform was due to electoral motivations or, rather, to concerns about the quality of public education provision. We present evidence that social learning is a relevant factor in the reform's implementation and find that mayors are more likely to adhere to the program upon receipt of good news about the electoral returns of decentralization. On the other hand, experiences by information neighbors that turn out to be successful in improving public provision seem to be ignored in mayors' decisions on decentralization. The argument for electoral motivations is further supported by evidence that officials tend to be more responsive to information transmitted by neighbors affiliated with the same party as their own.
31 Oct 18:32

Do More Educated Leaders Raise Citizens' Education?

by Diaz-Serrano, Luis, Pérez, Jessica
This paper looks at the contribution of political leaders to enhancing citizens' education and investigates how the educational attainment of the population is affected while a leader with higher education remains in office. For this purpose, we consider educational transitions of political leaders in office and find that the educational attainment of the population increases when a more educated leader remains in office. Furthermore, we also observe that the educational attainment of the population is negatively affected when a country transitions from an educated leader to a less educated one. This result may help to explain the previous finding that more educated political leaders favor economic growth.
07 Oct 22:28

Memo to Ed Miliband: My Marxist father was wrong, too

by Steve

The Daily Mail is involved in a dispute with Ed Miliband over a recent article in the paper that accused Miliband’s father Ralph, a Marxist academic, of hating Britain. Miliband says he disagrees with the views of his father, who died in 1994, but says his father loved Britain. Dalrymple has written a short but powerful article for the Telegraph that reminds readers of the troubling emotional and moral foundations of Marxists like Ralph Miliband. I should know, Dalrymple says, because my father was a Marxist too:

I saw that his concern for the fate of humanity in general was inconsistent with his contempt for the actual people by whom he was surrounded, and his inability to support relations of equality with others. I concluded that the humanitarian protestations of Marxists were a mask for an urge to domination.

In addition to the emotional dishonesty of Marxism, I was impressed by its limitless resources of intellectual dishonesty. Having grown up with the Little Lenin Library and (God help us!) the Little Stalin Library, I quickly grasped that the dialectic could prove anything you wanted it to prove, for example, that killing whole categories of people was a requirement of elementary decency.

02 Oct 18:04

Bayesian Orgulity

Gordon Belot
Philosophy of Science, Volume 80, Issue 4, Page 483-503, October 2013.
23 Sep 22:42

Epigenetics smackdown at the Guardian

by whyevolutionistrue

Well, since the tussle about epigenetics involves Brits, they’re really too polite to engage in a “smackdown.” Let’s just call it a “kerfuffle.” Nevertheless, two scientists have an enlightening 25-minute discussion about epigenetics at the Guardian’s weekly science podcast (click the link and listen from 24:30 to 49:10). If you’re science-friendly and have an interest in this ‘controversy,’ by all means listen in. It’s a good debate about whether “Lamarckian” inheritance threatens to overturn the modern theory of evolution.

Readers know how I feel about the epigenetics “controversy.” “Epigenetics” was once a term used simply to mean “development,” that is, how the genes expressed themselves in a way that could construct an organism. More recently, the term has taken on the meaning of “environmental modifications of DNA,” usually involving methylation of DNA bases.  And that is important in development, too, for such methylation is critical in determining how genes work, as well as in how genes are differentially expressed when they come from the mother versus the father.

But epigenetics has now been suggested to show that neo-Darwinism is wrong: that environmental modifications of the DNA—I’m not referring to methylation that is actually itself coded in the DNA—can be passed on for many generations, forming a type of “Lamarckian” inheritance that has long been thought impossible.  I’ve discussed this claim in detail and have tried to show that environmentally-induced modifications of DNA are inevitably eroded away within one or a few generations, and therefore cannot form a stable basis for evolutionary adaptation.  Further, we have no evidence of any adaptations that are based on modifications of the DNA originally produced by the environment.
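
The erosion argument can be put in one toy calculation: if an environmentally induced mark is transmitted to the next generation with probability p and otherwise reset, the expected fraction of lineages still carrying it after g generations is p raised to the power g, which collapses quickly even for generous p. The values of p below are arbitrary illustrations, not measured transmission rates.

    # Toy calculation only: expected fraction of lineages still carrying an
    # environmentally induced mark after g generations, if it is retained each
    # generation with probability p and otherwise reset. Not a model of any real system.

    for p in (0.9, 0.5, 0.1):
        line = ", ".join(f"gen {g}: {p ** g:.3f}" for g in range(1, 6))
        print(f"p = {p}: {line}")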

In the Guardian show, the “Coyne-ian” position is taken by Dr. George Davey Smith, a clinical epidemiologist at the University of Bristol.  The “epigenetics-will-revise-our-view-of-evolution” side is taken by Dr. Tim Spector, a genetic epidemiologist at King’s College. Smith makes many of the points that I’ve tried to make over the past few years, and I hope it’s not too self-aggrandizing to say that I think he gets the best of Spector, who can defend the position only that epigenetic modification is important within one generation (e.g., cancer) or at most between just two generations.

But listen for yourself. These guys are more up on the literature than I am, and I was glad to see that, given Smith’s unrebutted arguments, neo-Darwinism is still not in serious danger. (I have to say, though, that I’d like to think that if we found stable and environmentally induced inheritance that could cause adaptive changes in the genome, I’d be the first to admit it.)

h/t: Tony


25 Aug 13:22

"The People Want the Fall of the Regime": Schooling, Political Protest, and the Economy

by Filipe Campante
We provide evidence that economic circumstances are a key intermediating variable for understanding the relationship between schooling and political protest. Using the World Values Survey data, we find that individuals with higher levels of schooling, but whose income outcomes fall short of that predicted by a comprehensive set of their observable characteristics, in turn display a greater propensity to engage in protest activities. We argue that this evidence is consistent with the idea that a decrease in the opportunity cost of the use of human capital in labor markets encourages its use in political activities instead, and is unlikely to be explained solely by either a pure grievance effect or by self-selection. We then show separate evidence that these forces appear to matter too at the country level: Rising education levels coupled with macroeconomic weakness are associated with increased incumbent turnover, as well as subsequent pressures toward democratization.
04 Aug 21:13

Quotation of the Day…

by Don Boudreaux

… is from Parker T. Moon’s 1928 volume, Imperialism and World Politics; it’s a passage quoted often by Tom Palmer – for example, on page 418 of Tom’s 1996 essay “Myths of Individualism,” which is reprinted in Toward Liberty (David Boaz, ed., 2002):

Language often obscures truth.  More than is ordinarily realized, our eyes are blinded to the facts of international relations by tricks of the tongue.  When one uses the simple monosyllable “France” one thinks of France as a unit, an entity.  When to avoid awkward repetition we use a personal pronoun in referring to a country – when for example we say “France sent her troops to conquer Tunis” – we impute not only unity but personality to the country.  The very words conceal the facts and make international relations a glamorous drama in which personalized nations are the actors, and all too easily we forget the flesh-and-blood men and women who are the true actors.  How different it would be if we had no such word as “France,” and had to say instead – thirty-eight million men, women and children of very diversified interests and beliefs, inhabiting 218,000 square miles of territory!  Then we should more accurately describe the Tunis expedition in some such way as this: “A few of these thirty-eight million persons sent thirty thousand others to conquer Tunis.”  This way of putting the fact immediately suggests a question, or rather a series of questions.  Who are the “few”?  Why did they send the thirty thousand to Tunis?  And why did these obey?

Moon’s point applies also, of course, to trade.  When we say, for example, that “America trades with China” we too easily overlook the fact that what’s going on is nothing other than some number of flesh-and-blood individuals living in, or citizens of, a geo-political region that we today conventionally call “America” voluntarily buying from and selling to a number of flesh-and-blood individuals living in, or citizens of, a geo-political region that we today conventionally call “China.”

Nothing - nothing at all - about such exchanges differs in any economically relevant way from exchanges that take place exclusively among Americans or from exchanges that take place exclusively among the Chinese.  Any downsides or upsides that you might identify as a result of Americans trading with the Chinese exist when Americans trade with Americans or when the Chinese trade with the Chinese.  Yet personifying the collective masks this reality that all exchange is carried out by flesh-and-blood individuals, and that nothing about having those exchanges occur across man-drawn political boundaries is economically relevant.

02 Aug 18:08

More On Detroit …

by admin

A few thoughts.

  1. Amazingly, there are some who believe that Detroit’s problems were caused by too little govt., or at least by something other than what it was
  2. I think even the hard-core leftists at the federal level understand the moral-hazard disaster that would accompany a federal bailout of Detroit
  3. I wish people wouldn’t refer to leftist/statists as “liberals”

Will’s column is excellent.

George F. Will: Detroit’s death by democracy – The Washington Post: Auto industry executives, who often were invertebrate mediocrities, continually bought labor peace by mortgaging their companies’ futures in surrenders to union demands. Then city officials gave their employees — who have 47 unions, including one for crossing guards — pay scales comparable to those of autoworkers. Thus did private-sector decadence drive public-sector dysfunction — government negotiating with government-employees’ unions that are government organized as an interest group to lobby itself to do what it wants to do: Grow.

Related links:
A Look Inside Detroit, Bus Edition …

27 Jul 08:17

Special Rules For The Rulers, Alcohol Edition

by admin

“All animals are equal, but some animals are more equal than others.” — George Orwell

LCBO’s new ‘simplified pricing formula’ gives diplomats, federal government 49% discount on booze | National Post: OTTAWA — Ontario’s liquor board has sweetened an already sweet deal for the federal government and foreign diplomats as it chops the prices they pay for beer, wine and booze almost in half. Late last month, the Liquor Control Board of Ontario began offering its products to federal departments and agencies at a 49% discount from the retail price that everyone else pays. The cut rate on alcoholic beverages is also available to foreign embassies, high commissions, consulates and trade missions, most of whom are located in the Ottawa area, within the LCBO’s jurisdiction. And the favoured buyers will be exempt from an LCBO policy dating from 2001 that sets minimum prices for products, under its “social responsibility” mandate.

15 Jul 22:53

My Brain Made Me Do It? (More on X-phi and bypassing)

by Eddy Nahmias

I’d already written up this post, and it raises some of the issues that are being discussed in the previous post’s thread, so I figured I’d post it now and then respond to comments and critiques to both posts as the week progresses.  (Plus we’re putting up follow-up experiments this week, so if anyone has helpful criticisms, we might try to address them.)

So, here’s a summary of my latest x-phi results on what people say about the possibility of perfect prediction based on neural activity.  The goal was not to address traditional philosophical debates (head on) but to pose challenges to a claim often made by “Willusionists” (people who claim that modern mind sciences, such as neuroscience, show that free will is an illusion).  Willusionist arguments typically have a suppressed (definitional) premise that says “Free will requires that not X” and then they proceed to argue that science shows that X is true for humans.  (In this forthcoming chapter I argue that Willusionists are often unclear about what they take X to be—determinism, physicalism, epiphenomenalism, rationalization—and that the evidence they present is not sufficient to demonstrate X for any X that should be taken to threaten free will.)

The definitional premise in these arguments is usually asserted based on Willusionists’ assumptions about what ordinary people believe (or sometimes their interpretation of an assumed consensus among philosophers).  I think their assumptions are typically wrong.  For instance, Sam Harris in his book Free Will argues that ordinary people would see that neuroscience threatens free will once they recognized that it allows in principle perfect prediction of decisions and behavior based on neural activity, even before people are aware of their decisions, and he gives a detailed description of such a scenario (on pp. 10-11).   

With former students Jason Shepard (now at Emory psych), Shane Reuter (now at WashU PNP), and Morgan Thompson (going to Pitt HPS), we tested Harris’ prediction.  I will provide the complete scenario we used in the first comment below.  

The basic idea was to develop Harris’ scenario in full detail.  We explain the possibility of a neuroimaging cap that would provide neuroscientists information about a person’s brain activity sufficient to predict with 100% accuracy everything the person will think and decide before she is aware of thinking or deciding it.  They do this while Jill wears the cap for a month (of course they predict what she’ll do even when she tries to trick them).  Along with everything else she decides or does, the neuroscientists predict how she will vote for Governor and President.  (In one version the device also allows the neuroscientists to alter Jill’s brain activity and they change her vote for Governor, but not President, without her awareness of it.) 

On the one hand, our scenario does not suggest that people’s (conscious) mental activity is bypassed (or causally irrelevant) to what they decide and do.  On the other hand, it is difficult to see how to interpret it such that it allows a causal role for a non-physical mind or soul (or perhaps for Kane-style indeterminism in the brain at torn decisions or agent-causal powers).  The scenario concludes: “Indeed, these experiments confirm that all human mental activity is entirely based on brain activity such that everything that any human thinks or does could be predicted ahead of time based on their earlier brain activity.”  (Jason, Shane, and I plan to do follow up experiments with scenarios that more explicitly rule out dualism or non-reductionism and ones that are explicitly dualist, and also use ones that include moral decisions.  See first comment.)

Some highlights of our results (using 278 GSU undergrads, though I’ve also run it on some middle school students for a Philosophy Outreach class and their responses follow the same patterns; we may try it on an mTurk sample too):

80% agree that “It is possible this technology could exist in the future”.  This surprised me since there are so many reasons one might think it couldn’t be developed.  Of the 20% who disagreed, only a handful mentioned the mind or soul or free will; instead, most mentioned mundane things like people not allowing it to be developed or technological difficulties.  Responses to this possibility question didn’t correlate with responses to those described below.

The vast majority (typically 75-90%) responded that this scenario does not conflict with free will or responsibility regarding (a) Jill’s non-manipulated vote, (b) Jill’s actions in general while wearing the scanner, or (c) people in general if this technology actually existed, though agreement on (c) was lower in the scenario where the neuroscientists could manipulate. Mean scores were typically around 6 on a 7-point scale, though they were lower for the same statements in the case where the neuroscientists could manipulate decisions but didn’t. Means were similar for questions about having choices, making choices, and deserving blame.

Responses were markedly different for the decisions that were in fact manipulated by the neuroscientist (in the 2.3 range).  This is not surprising but at least it shows people are paying attention (questions were intermixed), and that they are not just offering ‘free will no matter what’ judgments in the other cases.

Responses to “bypassing” questions partially mediated responses to free will and responsibility questions.  That is, the degree to which people (dis)agreed with statements like, “Jill’s reasons had no effect on how she voted for Governor” or “If this technology existed, then people’s reasons would have no effect on what they did” partially explained their responses to statements about free will and responsibility for Jill or for people.
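
For readers unfamiliar with the mediation vocabulary, here is a minimal Baron-and-Kenny style sketch with fabricated data; it shows what “partially mediated” means operationally (the effect of condition on free-will ratings shrinks once the bypassing measure is controlled for) and is not the authors’ actual analysis.

    # Sketch of a partial-mediation check with fabricated data: does agreement
    # with "bypassing" statements (mediator) account for part of the effect of
    # condition (manipulated vs. not) on free-will ratings (outcome)?

    import numpy as np

    rng = np.random.default_rng(0)
    n = 278
    condition = rng.integers(0, 2, n)                    # 0 = no manipulation, 1 = manipulated
    bypassing = 2 + 3 * condition + rng.normal(0, 1, n)  # mediator (fabricated agreement scores)
    free_will = 6 - 0.5 * condition - 0.8 * bypassing + rng.normal(0, 1, n)

    def ols(y, *xs):
        X = np.column_stack([np.ones(len(y)), *xs])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta  # [intercept, slopes...]

    total = ols(free_will, condition)[1]              # total effect of condition
    direct = ols(free_will, condition, bypassing)[1]  # effect controlling for the mediator
    print(f"total effect {total:.2f}, direct effect {direct:.2f}")
    print("partial mediation suggested" if abs(direct) < abs(total) else "no reduction")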

We also asked whether people agreed with this statement: “If this technology existed, it would show that each decision a person makes is caused by particular brain states.”  Even though the scenario does not explicitly discuss neural causation, 2/3 agreed and responses to this statement did not correlate with responses to statements about free will and responsibility. 

Now, we think these results falsify the armchair prediction made by Harris (and suggested by other Willusionists) about what ordinary people believe about free will.  And I also think they provide support for the idea that most people are amenable to a naturalistic understanding of the mind and free will, or better, that they have what I call a ‘theory-lite’ understanding (with Morgan, I develop this idea in this paper, which is a companion piece to this paper by Joshua Knobe—feel free to broaden the discussion to these papers if you look at them, and I’ll have to say more about the 'theory-lite' view in response to Thomas’ questions).

But rather than further describing our interpretations of these results (and some potential objections to them and data problematic for them, such as the data presented by Thomas in comments), which I will do in the comments, let me start by asking y’all what you think these results show, if anything, about people’s beliefs, theories, and intuitions about free will, responsibility, the mind-body relation, etc.  And to the extent that you think they don’t show much, why is that?  Are there variations of the scenarios (or questions) that you think would help us show more? (Feel free to throw in critiques of x-phi along the way!)

09 Jul 10:42

Su Shi hopes his son is stupid

by Sedulia


Families, when a child is born, want it to be intelligent.
I, through intelligence
having wrecked my whole life,
only hope the baby will prove
ignorant and stupid.
Then he will crown a tranquil life
by becoming a cabinet minister.

   --Chinese poet 蘇軾 Su Shi (1037-1101), also known as Su Dongpo, on the birth of his son. Translation by Arthur Waley (1889-1966).

人皆生子望聰明
我被聰明誤一生
但願吾兒魯且愚
無災無難到公卿。