Shared posts

06 Jan 20:25

Trade Liberalization and Wage Inequality: New Insights from a Dynamic Trade Model with Heterogeneous Firms and Comparative Advantage

by Wolfgang Lechthaler, Mariya Mileva
We develop a dynamic general equilibrium trade model with comparative advantage, heterogeneous firms, heterogeneous workers and endogenous firm entry to study wage inequality during the adjustment after trade liberalization. We find that trade liberalization increases wage inequality both in the short run and in the long run. In the short run, wage inequality is mainly driven by an increase in inter-sectoral wage inequality, while in the medium to long run, wage inequality is driven by an increase in the skill premium. Incorporating worker training in the model considerably reduces the effects of trade liberalization on wage inequality. The effects on wage inequality are much more adverse when trade liberalization is unilateral instead of bilateral or restricted to specific sectors instead of including all sectors.
10 Jan 15:36

Blame Canada: Toronto university sparks fracas by supporting student’s refusal to work with women

by whyevolutionistrue
Readers Diana and Lynn, both Canadians, called my attention to a news item from Ontario—and a public debate—taking place about the conflict between state and religion. It’s every bit as portentous as the burqa debate in Europe (France, by the way, just upheld its ban on the burqa by convicting a wearer), but doesn’t seem to have gotten on the international radar screen.
The issue has been reported by the CBC, the Star, and the National Post, but the quotes (indented) are from the Star, whose coverage is most complete. This took place at York University in Toronto, which, like most universities in Canada, is a public school.
In short, a male student of unknown faith (the possibilities include Orthodox Judaism or Islam, but I suspect the latter) asked to opt out of participating in a sociology class’s focus group because it included women—and associating with women violates his religion. The professor refused this request as outrageous, but—and here’s the kicker—the York administration ordered the prof to comply.  The professor, Dr. Paul Grayson, blew the whistle on his administration. That is a brave guy!
Here are the details:

The brouhaha began in September when a student in an online sociology class emailed Grayson about the class’s only in-person requirement: a student-run focus group.

“One of the main reasons that I have chosen internet courses to complete my BA is due to my firm religious beliefs,” the student wrote. “It will not be possible for me to meet in public with a group of women (the majority of my group) to complete some of these tasks.”

While Grayson’s gut reaction was to deny the request, he forwarded the email to the faculty’s dean and the director for the centre for human rights.

Their response shocked him; the student’s request was permitted.

The reasoning was apparently that students studying abroad in the same online class were given accommodations, and allowed to complete an alternative assignment.

“I think Mr. X must be accommodated in exactly the same way as the distant student has been,” the vice dean wrote to Grayson.

That, of course, is insane, because the student was not overseas and his refusal was due not to an inability to travel to Canada but to the fact that he just didn't want to work with women. And Grayson, in his reply to the dean, pulled no punches:

“York is a secular university. It is not a Protestant, Catholic, Jewish, or Moslem university. In our policy documents and (hopefully) in our classes we cling to the secular idea that all should be treated equally, independent of, for example, their religion or sex or race.

“Treating Mr. X equally would mean that, like other students, he is expected to interact with female students in his group.”

In a masterpiece of political correctness, the University stuck to its guns:

A university provost, speaking on behalf of the dean, said the decision to grant the student’s request was made after consulting legal counsel, the Ontario Human Rights Code and the university’s human rights centre.

“Students often select online courses to help them navigate all types of personal circumstances that make it difficult for them to attend classes on campus, and all students in the class would normally have access to whatever alternative grading scheme had been put in place as a result of the online format,” said Rhonda Lenton, provost and vice president academic.

The director of the Centre for Human Rights also weighed in on the decision in an email to Grayson.

“While I fully share your initial impression, the OHRC does require accommodations based on religious observances.”

Well, perhaps it does, but religious accommodations must give way when they conflict with the public good, and this is a public university. Refusing to associate with women is nothing other than an attempt to cast them as second-class citizens, and that human right trumps whatever misogyny is considered a “religious right.” If Mr. X wants to go to a synagogue in which women must sit in the back, or a mosque in which women can’t pray with men, that is his right, but he doesn’t have any right to make a public university accommodate that lunacy, any more than University College London can enforce gender-segregated seating at public lectures.

What’s the logical outcome of this kind of pandering to religion? Grayson again gave the university no quarter:

The professor argued that if a Christian student refused to interact with a black student, as one could argue with a skewed interpretation of the Bible, the university would undoubtedly reject the request.

“I see no difference in this situation,” Grayson wrote.

The interesting thing is that after hearing from the dean, Grayson (not knowing the student’s religion) consulted both Orthodox Jewish and Islamic scholars at York, who both told him that there was no bar to associating with women in their faiths so long as there was no physical contact. On that basis, Grayson and his colleagues in the sociology department refused the student’s request.

In the end, the student gave in. That might be the end of it, but Grayson still may face disciplinary action (though, like him, I doubt it). The one who looks bad here is the university, which would even consider granting such a request.

Apparently this kind of clash between religious and secular values is not unique in Canadian education. As the Star reports:

The incident is the latest clash between religious values and Ontario’s secular education system.

Catholic schools resisted a call by Queen’s Park to allow so-called gay-straight student clubs because of the Vatican’s historic stand against homosexuality. But the government insisted such clubs be permitted as a tool against bullying — and a nod to Ontario’s commitment to freedom of sexual orientation.

Similar debate erupted in 2011 when a Toronto school in a largely Muslim neighbourhood allowed a Friday prayer service in the school cafeteria so that students would not leave for the mosque and not return.

However fewer cases have taken place at the post-secondary level.

Well, public schools are public schools, and they're all supported by taxpayers. Just as a public university cannot teach creationism as science in the U.S. (at least at Ball State University), so a public university in Canada cannot discriminate against women, even in the name of catering to religious faith. How can the government insist that Catholic schools accept “gay-straight” clubs on the grounds of supporting freedom of sexual orientation, yet allow a student, also on religious grounds, to discriminate against women?

There is no end to crazy religious beliefs, and I see no reason why basic human rights should be abrogated to cater to all those beliefs. The administration of York University now has egg on its face, and Professor Grayson is the hero.

There is now a Care2 petition that you can sign, directed to Martin Singer, Dean of the Faculty of Liberal Arts & Professional Studies, and Noël A. J. Badiou, Director at York University’s Centre for Human Rights, the two officials who supported the student’s right to refuse to associate with women. It reads, in part:

We the undersigned stand up for women’s and men’s equality, as enshrined in the Canadian Charter of Rights and Freedoms, and the Ontario Human Rights Code.

The statements and decisions made in this matter by Mr. Singer and Mr. Badiou suggest that they believe gender equality is subordinate to religious beliefs. We urge York University to retract this and re-affirm their stand on gender equality and women’s rights.

You don’t have to be Canadian to sign it, and right now there are only 453 signatures. They’re aiming for 1,000, so if you agree, head over to this link and add your name.


01 Jan 00:00

Experimental Evidence on the Effects of Home Computers on Academic Achievement among Schoolchildren

by Fairlie, Robert W., Robinson, Jonathan
Computers are an important part of modern education, yet many schoolchildren lack access to a computer at home. We test whether this impedes educational achievement by conducting the largest-ever field experiment that randomly provides free home computers to students. Although computer ownership and use increased substantially, we find no effects on any educational outcomes, including grades, test scores, credits earned, attendance and disciplinary actions. Our estimates are precise enough to rule out even modestly-sized positive or negative impacts. The estimated null effect is consistent with survey evidence showing no change in homework time or other "intermediate" inputs in education.
30 Dec 08:31

Revisionism vs. Classical Compatibilism, part II

by Joseph Campbell

In conversation, Bert Baumgaertner (UI) and Michael Goldsby (WSU) have suggested another way of contrasting my view with Vargas’ view.

First, there is revisionism 1.0: Who cares what the folk think? What matters is what philosophers think, for “free will” is a term of art. Next, there is revisionism 1.1: The folk conception of free will is different from the concept we should adopt, but we should adopt philosophical compatibilism. This is Vargas’ view. Lastly, there is revisionism 1.2: The folk conception of free will is, generally speaking, the concept we should adopt, but it needs some tweaking here and there. This is Nahmias’ view. Ultimately, the difference between Vargas and me is that I lean more toward revisionism 1.2, and even 1.0, than toward revisionism 1.1.

Manuel suggests that version 1.0 isn't a version of revisionism at all. Perhaps it is best characterized as anti-revisionist, and the real debate is between versions 1.1 and 1.2.

Here is where I get confused and uncomfortable. I want to say that there is a core concept of free will that is aligned with sourcehood or the ability to do otherwise or whatever -- something neutral wrt the compatibility problem. Any supposed connections between the core concept and incompatibilism (or compatibilism, for that matter) are the result of fallacious reasoning.

Can I say this? It isn't clear what would count as evidence for such a claim.

11 Dec 15:08

Peter Tse's The Neural Basis of Free Will: An Overview

by Thomas Nadelhoffer

A while back I posted an exchange between Peter Tse and Neil Levy that focused on parts of Peter's new book, The Neural Basis of Free Will: Criterial Causation.  In the wake of that discussion, I asked Peter if he would be interested in writing up an accessible overview of the argument he develops in the book.  Fortunately, he was happy to oblige!  The following is what he sent me to post here on Flickers.  Given the intersection between work in neuroscience and work on the philosophy of action, I think we all need to work a little harder to understand what's happening on the other half of this disciplinary divide. In that spirit, I have posted Peter's overview below the fold.  Hopefully, everyone will join the ensuing discussion.  If he's right, then free will skeptics like me have some work to do!

Abstract: In my book I use recent developments in neuroscience to show how volitional mental events can be causal within a physicalist paradigm. (1) I begin by attacking the logic of Jaegwon Kim’s exclusion argument, according to which mental information cannot be causal of physical events. I argue that the exclusion argument falls apart if indeterminism is the case. If I am right, I must still build an account of how mental events are causal in the brain. To that end I take as my foundation (2) a new understanding of the neural code that emphasizes rapid synaptic resetting over the traditional emphasis on neural spiking. (3) Such a neural code is an instance of ‘criterial causation,’ which requires modifying standard interventionist conceptions of causation. A synaptic reweighting neural code provides (4) a physical mechanism that accomplishes downward informational causation, (5) a middle path between determinism and randomness, and (6) a way for mind/brain events to turn out otherwise. This ‘synaptic neural code’ allows a constrained form of randomness parameterized by information realized in and set in synaptic weights, which in turn allows physical/informational criteria to be met in multiple possible ways when combined with an account of how randomness in the synapse is amplified to the level of randomness in spike timing. This new view of the neural code also provides (7) a way out of self-causation arguments against the possibility of mental causation. It leads to (8) an emphasis on deliberation and voluntary attentional manipulation as the core of volitional mental causation rather than, say, the correlates of unconscious premotor computations seen in Libet’s readiness potentials. And this new view of the neural code leads to (9) a new theory of the neural correlates of qualia as the ‘precompiled’ informational format that can be manipulated by voluntary attention, which gives qualia a causal role within a physicalist paradigm. I elaborate each of these ideas in turn below.

 

(1) Countering Kim’s exclusion argument. The exclusion argument is, roughly, that the physical substrate does all the causal work that the supervenient mental state is supposed to do, so mental or informational events can play no causal role in material events. On Kim’s reductionistic view, all causation seeps away to the rootmost physical level, i.e. particles or strings. Add to that an assumption of determinism, and the laws of physics applicable at the rootmost level are sufficient to account for event outcomes at that level and every level that might supervene on that level. So informational causation, including voluntary mental causation or any type of free will that relies on it, is ruled out.

I argue that indeterminism undermines this sufficiency, so provides an opening whereby physically realized mental events could be downwardly causal. I argue that biological systems introduced a new kind of physical causation into the universe, one based upon triggering physical actions in response to detected spatiotemporal patterns in energy. This is a very different kind of causation than traditional Newtonian conceptions of the causal attributes of energy, such as mass, momentum, frequency or position, which seem to underlie deterministic and exclusionary intuitions. But patterns, unlike amounts of energy, lack mass and momentum and can be created and destroyed. They only become causal if there are physical detectors that respond to some pattern in energetic inputs. Basing causal chains upon successions of detected patterns in energy, rather than the transfer of energy among particles, opens the door not only to informational downward causation but to causal chains (such as mental causal chains or causal chains that might underlie a game of baseball or bridge) that are not describable by or solely explainable by the laws of physics applicable at the rootmost level. Yes, a succession of patterns must be realized in a physical causal chain that is consistent with the laws of physics, but many other possible causal chains that are also consistent with physical laws are ruled out by informational criteria imposed on indeterministic particle outcomes. Physical/informational criteria set in synaptic weights effectively sculpt informational causal chains out of the ‘substrate’ of possible physical causal chains.

(2) A new view of the neural code: I develop a new understanding of the neural code that emphasizes rapid and dynamic synaptic weight resetting over neural firing as the core engine of information processing in the brain. The neural code is not solely a spike code, but a code where information is transmitted and transformed by flexibly and temporarily changing synaptic weights on a millisecond timescale. One metaphor is the rapid reshaping of the mouth (analogous to rapid, temporary synaptic weight resetting) that must take place just before vibrating air (analogous to spike trains) passes through, if information is to be realized and communicated. What rapid synaptic resetting allows is a moment-by-moment changing of the physical and informational parameters or criteria that have to be met before a neuron will fire. This dictates what information neurons will be responsive to and what they will ‘say’ to one another from moment to moment.

(3) Rethinking interventionist models of causation: Standard interventionist models of causation manipulate A to determine what effects, if any, there might be on B and other variables. If instead of manipulating A's output, we manipulate the criteria, parameters or conditions that B places on A's input, which must be satisfied before B changes or acts, then changes in B do not follow passively from changes in A as they would if A and B were billiard balls. Inputs from A can be identical, but in one case B changes in response to A, and in another it does not. This constant reparameterization of B is what neurons do when they change each other's synaptic weights. What I call "criterial causation" emphasizes that what can vary is either outputs from A to other nodes, or how inputs from A are decoded by receiving nodes. On this view, standard interventionist and Newtonian models of causation are a special case where B places no conditions on input from A. But the brain, if anything, emphasizes causation via reparameterization of B, by, for example, rapidly changing synaptic weights on post-synaptic neurons.
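The contrast can be made concrete with a small toy example. The sketch below is my own illustration, not code from the book, and every name and number in it is invented: a receiving unit B fires only if its weighted input meets a criterion, so an identical output from A is causal on one trial and inert on the next, purely because B's "synaptic weights" were reset in between.

```python
import numpy as np

def b_fires(input_spikes, weights, threshold):
    # B places a criterion on its inputs: fire only if the weighted sum meets the threshold.
    return float(np.dot(weights, input_spikes)) >= threshold

# A's output is identical on both trials: spikes on three of four input lines.
a_output = np.array([1, 1, 0, 1])

# Trial 1: B's current weights let this pattern satisfy its firing criterion.
w1 = np.array([0.4, 0.4, 0.1, 0.3])
# Trial 2: B has been reparameterized (weights reset); the same input no longer qualifies.
w2 = np.array([0.1, 0.2, 0.6, 0.1])

print(b_fires(a_output, w1, threshold=1.0))  # True  -> B responds to A
print(b_fires(a_output, w2, threshold=1.0))  # False -> same input from A, no response
```

The intervention that changes the outcome here is not on A's output but on how B decodes it, which is the sense in which criterial causation treats the standard billiard-ball picture as a special case where B imposes no conditions of its own.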

(4) How downward causation works: Downward causation means that events at a supervening level can influence outcomes at the rootmost level. In this context it would mean that information could influence particle paths. While it would be a case of impossible self-causation if a supervening event changed its own present physical basis, it is not impossible that supervening events, such as mental information, could bias future particle paths. How might this work in the brain? The key pattern in the brain to which neurons respond is temporal coincidence. A neuron will only fire if it receives a certain number of coincident inputs from other neurons. Criterial causation occurs where physical criteria imposed by synaptic weights on coincident inputs in turn realize informational criteria for firing. This permits information to be downwardly causal regarding which indeterministic events at the rootmost level will be realized; only those rootmost physical causal chains that meet physically realized informational criteria can drive a postsynaptic neuron to fire, and thus become causal at the level of information processing. Typically, the only thing that all the possible rootmost physical causal chains that meet those criteria have in common is that they meet the informational criteria set. To try to cut information out of the causal picture here is a mistake; the only way to understand why it is that just this subset of possible physical causal chains—namely those that are also informational causal chains—can occur, is to understand that it is informational criteria that dictate that class of possible outcomes.

The information that will be realized when a neuron’s criteria for firing have been met is already implicit in the set of synaptic weights that impose physical criteria for firing that in turn realize informational criteria for firing. That is, the information is already implicit in these weights before any inputs arrive, just as what sound your mouth will make is implicit in its shape before vibrating air is passed through. Assuming indeterminism, many combinations of possible particle paths can satisfy given physical criteria, and many more cannot. The subset that can satisfy the physical criteria needed to make a neuron fire is also the subset that can satisfy the informational criteria for firing (such as ‘is a face’) that those synaptic weights realize. So sets of possible paths that are open to indeterministic elementary particles which do not also realize an informational causal chain are in essence “deselected” by synaptic settings by virtue of the failure of those sets of paths to meet physical/informational criteria for the release of a spike.

(5) Between determinism and randomness: Hume (1739) wrote “’tis impossible to admit of any medium betwixt chance and an absolute necessity.” Many other philosophers have seen no middle path to free will between the equally ‘unfree’ extremes of determinism and randomness. They have either concluded that free will does not exist, or tried to argue that a weak version of free will, namely, ‘freedom from coercion,’ is compatible with determinism.

A strong conception of free will, however, is not compatible with either determined or random choices, because in the determined case there are no alternative outcomes and things cannot turn out otherwise, while in the random case what happens does not happen because it was willed. A strong free will requires meeting some high demands: Beings with free will (a) must have information processing circuits that have multiple courses of physical or mental activity open to them; (b) they must really be able to choose among them; (c) they must be or must have been able to have chosen otherwise once they have chosen; and (d) the choice must not be dictated by randomness alone, but by the informational parameters realized in those circuits. This is a tall order to fill, since it seems to require that acts of free will involve acts of self-causation.

Criterial causation offers a middle path between the two extremes of determinism and randomness that Hume was not in a position to see, namely, that physically realized informational criteria parameterize what class of neural activity can be causal of subsequent neural events. The information that meets preset physical/informational criteria may be random to a degree, but it must meet those criteria if it is to lead to neural firing, so is not utterly random. Preceding brain activity specifies the range of possible random outcomes to include only those that meet preset informational criteria for firing.

(6) How brain/mind events can turn out otherwise: The key mechanism, I argue, whereby atomic level indeterminism has its effects on macroscopic neural behavior is that it introduces randomness in spike timing. There is no need for bizarre notions such as consciousness collapsing wave packets or any other strange quantum effects beyond this. For example, quantum level noise expressed at the level of individual atoms, such as single magnesium atoms that block NMDA receptors, is amplified to the level of randomness and near chaos in neural and neural circuit spiking behavior. A single photon can even trigger neural firing in a stunning example of amplification from the quantum to macroscopic domains. The brain evolved to harness such ‘noise’ for information processing ends. Since the system is organized around coincidence detection, where spike coincidences (simultaneous arrival of spikes) are key triggers of informational realization (i.e. making neurons fire that are tuned to particular informational criteria), randomizing which incoming spike coincidences might meet a neuron's criteria for firing means informational parameters can be met in multiple ways just by chance.
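A toy simulation can make the claim about timing noise concrete. The sketch below is my own illustration rather than anything from the book, and the numbers are arbitrary: each presynaptic spike's arrival time is jittered by noise, and the neuron fires only if at least k spikes land inside a short coincidence window, so across trials the same criterion is met by different chance coincidences, and on some trials it is not met at all.

```python
import numpy as np

rng = np.random.default_rng(0)

def coincidence_trials(mean_times_ms, jitter_sd_ms, window_ms, k, trials=5):
    # Fire iff at least k jittered spikes arrive within one coincidence window.
    for t in range(trials):
        arrivals = np.sort(mean_times_ms + rng.normal(0.0, jitter_sd_ms, len(mean_times_ms)))
        # Slide a window across the sorted arrival times and take the largest coincidence count.
        best = max(np.sum((arrivals >= a) & (arrivals < a + window_ms)) for a in arrivals)
        print(f"trial {t}: arrivals={np.round(arrivals, 2)} ms -> fires={bool(best >= k)}")

# Four presynaptic spikes nominally arriving around 10 ms, jittered by ~1 ms of noise;
# the neuron's criterion is at least 3 coincident spikes within a 1.5 ms window.
coincidence_trials(mean_times_ms=np.array([9.5, 10.0, 10.3, 11.5]),
                   jitter_sd_ms=1.0, window_ms=1.5, k=3)
```

Which spikes happen to coincide varies randomly from trial to trial, but every trial on which the toy neuron fires is one on which the preset criterion was satisfied, which is the constrained, rather than utter, randomness the passage describes.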

(7) Skirting self-causation: A synaptic account of the neural code also gets around some thorny problems of self-causation that have been used to argue against the possibility of mental causation. The traditional argument is that a mental event realized in neural event x cannot change x because this would entail impossible self-causation. Criterial causation gets around this by granting that present self-causation is impossible. But it allows neurons to alter the physical realization of possible future mental events in a way that escapes the problem of self-causation of the mental upon the physical. Mental causation is crucially about setting synaptic weights. These serve as the physical grounds for the informational parameters that must be met by unpredictable future mental events.

(8) Voluntary attention and free will: I argue that the core circuits underlying free choice involve frontoparietal circuits that facilitate deliberation among options that are represented and manipulated in executive working memory areas. Playing out scenarios internally as virtual experience allows a superthreshold option to be chosen before specific motoric actions are planned. The chosen option can best meet criteria held in working memory, constrained by conditions of various evaluative circuits, including reward, emotional and cognitive circuits. This process also harnesses synaptic and ultimately atomic level randomness to foster the generation of novel and unforeseeable satisfactions of those criteria. Once criteria are met, executive circuits can alter synaptic weights on other circuits that will implement a planned operation or action.

(9) A new theory of qualia: The paradigmatic case of volitional mental control of behavior is voluntary attentional manipulation of representations in working memory such as the voluntary attentional tracking of one or a few objects among numerous otherwise identical objects. If there is a flock of indistinguishable birds, there is nothing about any individual bird that makes it more salient. But with volitional attention, any bird can be marked and kept track of. This salience is not driven by anything in the stimulus. It is voluntarily imposed on bottom-up information, and can lead to eventual motoric acts, such as shooting or pointing at the tracked bird. This leads to viewing the neural basis of attention and consciousness as not only realized in part in rapid synaptic reweighting, but also in particular patterns of spikes that serve as higher level units that traverse neural circuits and open what I call the ‘NMDA channel of communication.’ Qualia are necessary for volitional mental causation because they are the only informational format available to volitional attentional operations. Actions that follow volitional attentional operations, such as volitional tracking, cannot happen without consciousness. Qualia on this account are a ‘precompiled’ informational format made available to attentional selection and operations by earlier, unconscious information processing.

Conclusion: Assuming indeterminism, it is possible to be a physicalist who adheres to a strong conception of free will. On this view, mental and brain events really can turn out otherwise, yet are not utterly random. Prior neuronally realized information parameterizes what subsequent neuronally realized informational states will pass presently set physical/informational criteria for firing. This does not mean that we are utterly free to choose what we want to want. Some wants and criteria are innate, such as what smells good or bad. However, given a set of such innate parameters, the brain can generate and play out options, then select an option that adequately meets criteria, or generate further options. This process is closely tied to voluntary attentional manipulation in working memory, more commonly thought of as deliberation or imagination. Imagination is where the action is in free will.

11 Dec 05:00

Who Owns the Code of Life?

by Peter W. Huber


Washington control of genetic data would send us down the road to medical serfdom.

Image caption: Myriad Genetics spent half a billion dollars developing a database of mutations of the BRCA1 and BRCA2 genes. When proteins coded in the genes (as shown in BRCA1 in the original image) fail to do their genetic repair work properly, cancer risk is increased. (Dr. Mark J. Winter/Science Source)

A vast amount of valuable biochemical know-how is embedded in genes and in the complex biochemical webs that they create and control. Does the fact that nature invented it mean that it’s up for grabs? Lots of people are eager to grab it, Washington’s aspiring managers of our health-care economy now prominent among them.

At issue in the Myriad Genetics case decided by the Supreme Court in June were patent claims involving two genes, BRCA1 and BRCA2. Myriad has spent $500 million analyzing thousands of samples submitted by doctors and patients in search of BRCA mutations that sharply increase a woman’s risk of developing breast or ovarian cancer. Whenever the company found a mutation that it hadn’t seen before, it offered free testing to the patient’s relatives in exchange for information about their cancer history. As a result, Myriad can now predict the likely effects of about 97 percent of the BRCA mutations that it receives for analysis, up from about 60 percent 17 years ago.

In Myriad, all nine justices agreed that merely being the first to isolate a “naturally occurring” gene or other “product of nature” doesn’t entitle you to a patent. That came as no surprise. A year earlier, in Mayo v. Prometheus Labs, the Court had rejected a patent for a way to prescribe appropriate doses of certain drugs by tracking their metabolites in the patient’s blood, reaffirming long-standing rules that patents may not lay claim to a “law of nature,” “natural phenomenon,” or “abstract idea.” Previous rulings by the federal circuit court that decides patent appeals had reached similar conclusions in addressing attempts to patent all drugs that might be designed to control specific biochemical pathways. A description of a biological “mechanism of action” without a description of a new device or method to exploit it in some useful way is merely a “hunting license” for inventions not yet developed. A patent must describe “a complete and final invention”—a drug, for example, together with a description of the disorder that it can cure.

But very often, much of the cost of inventing the patentable cure is incurred working out the non-patentable molecular mechanics of the disease because all innovation in molecular medicine must in some way mimic or mirror molecular mechanisms of action already invented by nature. And spurred by the enormous promise of precisely targeted molecular medicine, a substantial part of our health-care economy is now engaged in working out the molecular mechanics of diseases. Washington is funding genomic research projects. Drug companies are heavily involved, joined by a rapidly growing number of diagnostic service companies. Doctors are gathering reams of new molecular data, patient by patient. Hospitals are mining their records for internal use and for sale to outsiders. Device manufacturers are racing to provide molecular diagnostic capabilities directly to consumers. Private insurance companies are mining the information they receive when claims are filed. And there are many signs that Washington intends to take charge of all of the above, as it tightens its grip on diagnostic devices and tests, what doctors diagnose, which diagnoses insurers cover at what price, and how the information acquired is distributed and used.

The patentability of genes is thus only one piece of a much broader debate about who will own and control the torrents of information that we have recently begun to extract from the most free, fecund, competitive, dynamic, intelligent, and valuable repository of know-how on the planet—life itself. That the private sector is already actively engaged in the extraction and analysis is a promising sign, but getting it fully engaged will require robust intellectual property rights, framed for a unique environment in which every fundamentally new invention must be anchored in a new understanding of some aspect of molecular biology. Individual gene patents are out of the picture now, but other forms of intellectual property already provide some protection. We should reaffirm and expand them. And we should view Washington’s plans to take charge instead for what they are: the most ambitious attempt to control the flow of information that the world has ever seen.

Drug companies began systematically exploring the molecular mechanics of diseases more than 30 years ago. Sometimes a gene itself is the essence of the cure. The insulin used by diabetics was extracted from pigs and cows until Genentech and Eli Lilly inserted the human insulin gene into a bacterium and brought “humulin” to market in 1982. Other therapies use viruses to insert into the patient’s cells a gene that codes for a healthy form of a flawed or missing protein. Recent “cancer immunotherapy” trials have shown great promise: the patient is treated with his or her own immune-system cells, genetically modified to induce them to attack their cancerous siblings. More often, a drug is designed to target a protein associated with a specific gene. “Structure-based” drug design and the biochemical wizardry used to produce monoclonal antibodies allow biochemists to craft molecules precisely matched to a target protein that plays a key role in, say, replicating HIV or a cancer cell. An FDA official recently estimated that 10 percent to 50 percent of drugs in pharmaceutical companies’ pipelines involve targeted therapies, and about one-third of new drugs approved by the FDA last year included genetic patient-selection criteria.

The development and use of all such therapies hinge, however, on understanding the roles that genes and proteins play in causing medically significant clinical effects. And as Myriad’s huge database illustrates, what looks like a single disorder to the clinician can often be caused by many different variations in genes that may interact in complex ways and that sometimes change on the fly, as they do in fast-mutating cancer cells or viruses like HIV. The molecular-to-clinical links are still more complex when drugs are added to modulate one or more of the patient-side molecules and unintended side effects enter the picture.

The databases and sophisticated analytical engines that must be developed to unravel these causal connections usually go far beyond “abstract ideas” or what any scientist would call a “law of nature.” To guide the prescription of HIV drug cocktails, Europe’s EuResist Network draws on data from tens of thousands of patients involving more than 100,000 treatment regimens associated with more than a million records of viral genetic sequences, viral loads, and white blood cell counts. Oncologists now speak of treatment “algorithms”—sets of rules for selecting and combining multiple drugs in cocktails that must often be adjusted during the course of treatment, as mutating cancer cells become resistant to some drugs and susceptible to others. IBM recently announced the arrival of a system to guide the prescription of cancer drugs, developed in partnership with WellPoint and Memorial Sloan-Kettering and powered by the supercomputer that won the engine-versus-experts challenge on Jeopardy. It has the power to sift through more than a million patient records representing decades of cancer-treatment history, and it will continue to add records and learn on the job.

Other companies are vying to combine efficient DNA-sequencing systems with interpretive engines that can analyze large numbers of genes and the clinical data needed to reveal their medical implications at prices that rival what Myriad is charging to analyze just two genes. Last year, 23andMe, a company founded to provide consumer genetic-sequencing services, announced that it would let other providers develop applications that would interact with data entrusted to 23andMe by its customers. Hundreds soon did. Their interests, Wired reported, included “integrating genetic data with electronic health records for studies at major research centers and . . . building consumer-health applications focused on diet, nutrition and sleep.” For individuals, 23andMe’s platform will, in the words of the company’s director of engineering, serve as “an operating system for your genome, a way that you can authorize what happens with your genome online.” Numerous websites are already coordinating ad hoc “crowd-sourced” studies of how patients respond to treatments for various diseases. The not-for-profit Cancer Commons is pursuing an “open science initiative linking cancer patients, physicians, and scientists in rapid learning communities.”

How much protection, if any, these databases, online services, and analytical engines will receive from our intellectual property laws remains to be seen. Indeed, where Myriad leaves the thousands of gene patents already granted is unclear—it will take years of further litigation to find out. Genes themselves or their biochemical logic are routinely incorporated into a wide variety of medical diagnostic tests and therapies; other sectors of the economy use genetically engineered cells, plants, and live organisms. Most such applications of genetic know-how will probably remain patentable because the constituent parts of an innovative product or process need not be patentable in themselves. Innovative methods for sequencing genes or analyzing their clinical implications will also remain patentable. What does seem clear is that in Myriad, the Court went out of its way to reaffirm and even broaden the scope of its earlier Mayo decision: a claim involving nothing more than the mechanistic science or an empirical correlation that links molecular causes to clinical effects isn’t patentable.

From the innovator’s perspective, however, patents that cover biological know-how only insofar as it is incorporated into an innovative drug or a diagnostic device provide little, if any, practical protection for what is often a large component of the ingenuity and cost of the invention. The successful development of a pioneering drug reveals key information about the molecular mechanics of a disease and a good strategy for controlling it. Armed with that knowledge, competitors can then modify the drug’s chemistry just enough to dodge the pioneer’s patent and rush in with slightly different drugs developed at much lower cost. In the end, the pioneer can easily be the only player that fails to profit from its own pathbreaking work. A Japanese researcher extracted the first statin from a fungus and discovered that it inhibits an enzyme that plays a key role in cholesterol synthesis; a colleague established its efficacy in limited human trials in the late 1970s. Following that lead, others quickly found or synthesized slightly different statins that worked better and had fewer side effects. Pfizer’s Lipitor, which was licensed two decades later, became the most lucrative drug in history.

“Repurposing” presents the flip side of the same problem. Nature often uses the same or very similar molecules to perform different functions at different points in our bodies. These molecules may then be involved in what used to be viewed as different diseases, and the same drug can then be used to treat them all. But when independent researchers or practicing doctors discover the new use, they usually lack the resources to conduct the clinical trials needed to get the drug’s license amended to cover it and can’t, in any event, market the drug for the new use so long as the patent on its chemistry lasts. And for both independent researchers and drug companies themselves, there is no profit in searching for new uses for an old drug unless there is a profitable market ahead that will cover the cost of the search. The new use may be entitled to what is called a “method” patent, but the patent will easily be dodged if the original patent has expired and cheap generic copies of the drug are readily available for doctors to prescribe.

Similar valuable spillovers occur as drug companies and others catalog the genes that determine how drugs interact with molecular bystanders. Countless patients owe a large debt to those who established that genetic variations in one group of enzymes cause some people to metabolize antidepressants, anticoagulants, and about 30 other types of drugs too quickly, before the drugs have a chance to work, and cause others to metabolize them too slowly, allowing the drugs to accumulate to toxic levels. The discovery of a new drug-modulating gene will often help improve the prescription of other drugs, too. Here again, only a fraction of the value of working out the link between variations in a particular gene and a drug’s performance will be captured by the company that discovers the link and incorporates that information into a first drug’s label.

Innovative diagnostic services are particularly vulnerable to free riders because their sole purpose is to convey information. Soon after it became clear that the Supreme Court would probably invalidate Myriad’s gene patents, two academic researchers teamed up with a gene-testing company and a venture-capital firm to buy clinical records from doctors who have treated patients using reports supplied by Myriad. A website set up on the day that the Supreme Court released its Myriad ruling invites patients to give away the same data instead. Neither scheme will create any new know-how; to the extent that they succeed, both will simply replicate Myriad’s database.

Washington is now hatching plans to free all the biological information that (in Washington’s view) deserves to be free. It’s also trying to make sure that we don’t misunderstand or even bother collecting the information that doesn’t.

To facilitate the development of “a new taxonomy of human diseases based on molecular biology,” a 2011 report commissioned by the National Institutes of Health recommends creation of a broadly accessible “Knowledge Network” that will aggregate data spanning all the molecular, clinical, and environmental factors that can affect our health. Last June, the NIH expressed strong support for a plan to standardize the collection, analysis, and sharing of genomic and clinical data across 41 countries. The 2011 report endorses government-funded pilot programs; the curtailment of at least some existing intellectual property rights (though “guidelines for intellectual property need to be clarified and concerns about loss of intellectual property rights addressed”); and “strong incentives” to promote participation by payers and providers. Indeed, the government may “ultimately need to require participation in such Knowledge Networks for reimbursement of health care expenses.” And data-sharing standards must be framed to discourage “proprietary databases for commercial intent.”

Somewhat paradoxically, the report hastens to add that its proposals extend only to the “pre-competitive” phase of research, presumably to leave some room for intellectual property rights directly tied to diagnostic devices and drugs. But there is no such phase—in pursuit of better products and services, the private sector is already competing to do most everything that the report describes. And a new molecular taxonomy of disease is already emerging: a steadily growing number of yesterday’s diseases now come with prefixes or suffixes that designate some biochemical detail to distinguish different forms—for example, “ER+,” for “estrogen-receptor-positive,” in front of “breast cancer.”

Meanwhile, Washington’s paymasters are busy solidifying their control of much of the data gathering and sharing. The Preventive Services Task Force decides which screening tests must be fully covered for which classes of patients by all private insurance policies, and which should be skipped for either medical or cost reasons. What insurers may charge is regulated, too, so more money spent complying with screening mandates will inevitably mean less spent on disfavored tests and associated treatments. Washington also has ambitious plans to take charge of pooling and analyzing the data that emerge. A new national network, overseen by a national coordinator for health information technology, will funnel medical data from providers and patients to designated scientists, statisticians, and other public and private providers and insurers.

For its part, the FDA has made clear that it intends to maintain tight control of the diagnostic services and devices that enable individuals to read their own molecular scripts. In 2010, Walgreens abruptly canceled plans to sell a test kit called Insight when informed by the FDA that the kit lacked a license. The mail-in saliva-collection kit would have told you what your genes might have to say about dozens of things, among them Alzheimer’s, breast cancer, diabetes, blood disorders, kidney disease, heart attacks, high blood pressure, leukemia, lung cancer, multiple sclerosis, obesity, psoriasis, cystic fibrosis, Tay-Sachs, and going blind—and also how your body might respond to caffeine, cholesterol drugs, blood thinners, and other prescription drugs. Invoking its authority to license every diagnostic “contrivance,” “in vitro agent,” or “other similar or related article,” the FDA announced a crackdown on all companies that attempted to sell such things to the public without the agency’s permission. The agency is determined to protect consumers from what it considers to be “unsupported clinical interpretations” supplied by providers or medically inappropriate responses to diagnostic reports by consumers themselves.

There is much reason to doubt, however, that Washington is qualified to teach the rest of the country how to gather or analyze biological know-how. For the last 50 years, the FDA, by scripting the clinical trials required to get drugs licensed, has controlled how those trials investigate the molecular factors that determine how well different patients respond to the same drug. The trials have, in fact, investigated appallingly little, because the agency still clings to testing protocols developed long before the advent of modern molecular medicine (see “Curing Diversity,” Autumn 2008). A report released in September 2012 by President Obama’s Council of Advisors on Science and Technology (PCAST) urges the FDA to adopt “modern statistical designs” to handle new types of trials that would gather far more information about the molecular biology that controls the development of the disease and its response to drugs. In a speech given last May, Janet Woodcock, head of the FDA’s Center for Drug Evaluation and Research, acknowledged the need to “turn the clinical trial paradigm on its head.”

More generally, Washington has been a persistent (and often inept) laggard in moving its supervisory, transactional, and information-gathering and dissemination operations into the digital age. The FDA’s “incompatible” and “outdated” information-technology systems, the PCAST report notes, are “woefully inadequate.” The agency lacks the “ability to integrate, manage, and analyze data . . . across offices and divisions,” and the processing of a new drug submission may thus involve “significant manual data manipulation.” Other parts of Washington launched plans to push digital technology into the rest of the health-care arena around 2005, encouraged by a RAND Corporation estimate that doing so would save the United States at least $81 billion a year. A follow-up report released early this year concluded, in the words of one of its authors, that “we’ve not achieved the productivity and quality benefits that are unquestionably there for the taking.”

Washington launched the modern era of comprehensive genetic mapping when it started funding the Human Genome Project in the late 1980s. Rarely has the federal government found a better way to spend $3 billion of taxpayer money. But far more mapping has been done since then with private funds, and finishing the job is going to cost trillions, not billions—far more than Washington can pay. In a welcoming environment, Wall Street, venture capitalists, monster drug companies, small biotechs, research hospitals, and many others would be pouring intellect and money into this process. The biosphere offers unlimited opportunity for valuable innovation. The technology is new, fantastically powerful, and constantly improving; the demand for what it can supply is insatiable. But Washington’s heavy-handed control of both the science and the economics of the data acquisition, analysis, and distribution will drive much of the private money out of the market. We should instead give the market the property rights and pricing flexibility that will get the private money fully engaged and leading the way.

As the Supreme Court has recognized, a property right that curtails the use of basic science may foreclose too much innovation by others later on. The discovery of a promising target, for example, shouldn’t be allowed to halt all competitive development of molecules that can detect or modulate it. Instead, we need rights that will simultaneously promote private investment in the expensive process of reverse-engineering nature and the broad distribution of the know-how thus acquired, so as to launch a broad range of follow-up research and innovation and allow front-end costs to be spread efficiently across future beneficiaries.

Models for property rights framed to strike such a balance already exist. As it happens, federal law already grants drug companies a copyright of sorts: a “data exclusivity” right that, for periods up to 12 years, bars the manufacturers of generic drugs from hitching a free ride through the FDA by citing the successful clinical trials already conducted by the pioneer. Data exclusivity rights could easily be extended to cover the development of biological know-how as well, enabling the market to spread the cost of developing such knowledge across the broad base of future beneficiaries. The rights should be narrowly tailored to specific diseases, but they should also last a long time.

The agency that issues data exclusivity rights should also have the authority to oversee a compulsory licensing process analogous to the one that allows singers to perform songs composed by others—or to the government’s “march-in” right to license patents for inventions developed with government funding to “reasonable applicants” if the patent holder has failed to take “effective steps to achieve practical application of the subject invention” or, more broadly, as needed to address the public’s “health and safety needs.” Licensing fees should be modest, and total licensing fees collected might be capped at a level commensurate with the cost of developing the biological know-how and its role in creating new value thereafter.

In addition to mobilizing private capital to develop know-how, well-crafted property rights promote its broad and economically efficient distribution. To begin with, they make the know-how immediately available—though not exploitable for profit—to researchers and competitors. Copyrights require publication of the protected content; patents require a description complete enough to enable others in the field to exploit the “full scope” of the claimed invention. Going forward, Myriad and companies like it will instead treat their discoveries and databases as trade secrets and frame contracts to forbid disclosure of their data by the doctors and patients who use their services.

As entertainment and digital software and hardware markets have also demonstrated, markets protected by the right forms of intellectual property usually find ways to develop tiered pricing schemes that allow widespread distribution of information at economically efficient prices while the rights last. By contrast, current policies in the health-care arena load most of the cost of developing biological know-how onto the shoulders of the patients who buy the pioneering drug or device while its patent lasts. The market will develop more pioneer drugs—and government paymasters will be more willing to accept them—if one important component of the know-how costs can be spread efficiently across many more products, services, and patients.

The emergence of a robust market for molecular and clinical information will also draw an important new competitor into drug markets: the independent researcher or company that develops the biological science needed to select the most promising targets and to design successful clinical trials for new drugs. Drug companies will inevitably continue to play a large role in the search for patient-side information. But for much the same reason that software companies welcome advances in digital hardware, drug companies should welcome the rise of independent companies with expertise in connecting molecular effects to clinical ones in human bodies. Independent companies are permitted to interact much more freely with doctors and patients. And by accelerating the development and wider distribution of information about the molecular roots of health and disease, companies that specialize in predictive molecular diagnosis will help mobilize demand for the expeditious delivery of drugs.

Finally, private markets have established that we do not need a government takeover to maximize the value of information by consolidating it efficiently in large databases. For well over a century, competing companies have been pooling their patents and interconnecting their telegraph, phone, and data networks, online reservation systems, and countless other information conduits and repositories. Aggregators buy up portfolios of patents and licenses for songs, movies, TV shows, and electronic books, and then offer access in many different, economically efficient, ways to all comers. The global financial network depends on instant data sharing among cooperating competitors. Many online commercial services today hinge on the market’s willingness to interconnect and exchange information stored in secure, proprietary databases.

With appropriate property rights in place, the economics of the market for clinical data will, in all likelihood, be largely self-regulating. The accumulation of new data points steadily dilutes the value of the old, though their aggregate value continues to grow. New data can be acquired every time a patient is diagnosed or treated. The best way to accelerate the process that makes information ubiquitously and cheaply available is to begin with economic incentives that reward those who develop new information faster and come up with ways to distribute it widely at economically efficient prices.

Conspicuous by its frequent absence in debates about who owns biological know-how is the patient’s right not to disclose information about his or her innards to anyone—the corollary of which should be a right to give it away, share it, sell it outright, or license it selectively, at whatever price the market will bear. The individual patient won’t often be interested in haggling over what a clinical record might be worth to a hospital or an insurance company. But emphatically reaffirming the private ownership of private information is the first, essential step in creating markets for those who would help collect and analyze the data. It would also help set the stage for some serious constitutional challenges to Washington’s attempts to displace them.

Washington assures us that patient privacy will be protected when it pools medical records for analysis by Washington-approved researchers and statisticians—but its policies reflect the conviction that the patient’s privacy rights will be bought by whoever is paying for the care, whether Washington itself or a private insurance company operating under Washington’s thumb. The Health Insurance Portability and Accountability Act of 1996 does indeed require that records, before they are shared, must be redacted to ensure that they can’t be linked back to individual patients. But Washington’s guidelines on “de-identification” of private health-care data are naively optimistic about what that would require. In a paper published in Science last January, an MIT research team described how easily it had used a genealogy website and publicly available records to extract patients’ identities from ostensibly anonymous DNA data. Genetic profiles reveal gender, race, family connections, ethnic identity, and facial features; they correlate quite well with surnames. DNA fingerprinting following an arrest is now routine, and a close match with a relative’s DNA can provide an excellent lead to your identity. Further, state-run clinics and hospitals are exempt from the federal requirements, and some are already selling patient records to outsiders, sometimes neglecting to remove information such as age, zip codes, and admission and discharge dates.

Even if privacy were fully protected, a promise to “de-identify” data doesn’t give the government the right to seize and distribute private information in the first place. Washington, it appears, now aspires to create what is, for all practical purposes, a federal mandate to participate in an insurance system in which Washington minutely regulates which diagnostic tests are administered and which treatments are provided—and then requires the sharing of the data acquired in the course of treatment. It seems unlikely that such a system can withstand constitutional scrutiny. A government grab for information, whether direct or through private intermediaries, is subject to the Fourth Amendment, which places strict limits on such invasions of our privacy.

At the opposite pole, the FDA’s decision to protect the masses from unapproved clinical interpretations of molecular data, or to limit or bar access to the information on the ground that the masses aren’t smart enough to handle it wisely, also looks constitutionally vulnerable. Home-use diagnostic devices and services offer patients the absolute medical privacy that they are entitled to. And the Supreme Court has repeatedly concluded that freedom of speech includes a private right to listen, read, and study that is even broader than the right to speak, write, and teach. Enabling tools and technology—the printing press and ink that produce the newspaper, for example—enjoy the same constitutional protection. Lower courts have recently also affirmed the First Amendment right to discuss off-label drug prescriptions. A constitutional right to engage in officially unapproved discussion about what a drug might do to a patient’s body must surely also cover unapproved discussion about what the patient’s own genes and proteins might do.

If we let them, markets for biological know-how will do exactly what markets do best and deliver what molecular medicine most needs. Propelled as they are by dispersed initiative and private choice, free markets are uniquely good at extracting and synthesizing information that’s widely dispersed among innovators, investors, workers, and customers—“personal knowledge,” in the words of Michael Polanyi, the brilliant Hungarian-British chemist, economist, and philosopher. Nowhere could the free market’s information-extracting genius be more important than in a market for products whose value depends on their ability to mirror biochemical information inside the people who use them.

The Supreme Court’s Myriad ruling is a reasonable construction of the patent law as currently written. But that we now find ourselves pondering whether Washington should be taking full control of the private genetic and clinical data that individuals supply—and that private companies like Myriad are eager to collect and analyze—provides a chilling reminder of how rapidly we are slouching down the road to medical serfdom.

Peter W. Huber is a Manhattan Institute senior fellow and the author of the forthcoming The Cure in the Code: How 20th Century Law Is Undermining 21st Century Medicine.

18 Nov 21:08

A comparison of public and private schools in Spain using robust nonparametric frontier methods

by Cordero, José Manuel, Prior, Diego, Simancas Rodríguez, Rosa
This paper uses an innovative approach to evaluate the educational performance of Spanish students in PISA 2009. Our purpose is to decompose their overall inefficiency into different components, with a special focus on the differences between public and state-subsidised private schools. We use a technique inspired by the non-parametric Free Disposal Hull (FDH) together with robust order-m models, which allow us to mitigate the influence of outliers and the curse of dimensionality. Subsequently, we adopt a metafrontier framework to assess each student relative to the best-practice frontier of their own group (students in the same school) and to frontiers constructed from the best practices of different types of schools. The results show that state-subsidised private schools outperform public schools, although the differences between them are significantly reduced once we control for the type of students enrolled in both types of centre.
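
For readers unfamiliar with order-m frontiers, the estimator can be sketched in a few lines of Monte Carlo code. The snippet below is a minimal illustrative sketch of an output-oriented order-m score in the spirit of the robust estimators the abstract refers to; the variable names and the settings m=25, B=200 are assumptions, not the paper's actual specification:

```python
import numpy as np

def order_m_output_score(X, Y, x0, y0, m=25, B=200, seed=0):
    """Monte Carlo output-oriented order-m efficiency score (illustrative).

    X: (n, p) inputs and Y: (n, q) outputs for the reference sample;
    x0: (p,) and y0: (q,) describe the unit being evaluated. A score
    above 1 means the unit could expand its outputs relative to the
    expected best practice among m peers drawn from those using no
    more of any input.
    """
    rng = np.random.default_rng(seed)
    peers = Y[np.all(X <= x0, axis=1)]          # peers dominated in inputs
    if peers.shape[0] == 0:
        return np.nan                           # no comparable observations
    draws = rng.integers(0, peers.shape[0], size=(B, m))
    # FDH output score of (x0, y0) within each pseudo-sample of size m peers
    fdh = np.min(peers[draws] / y0, axis=2).max(axis=1)
    return float(fdh.mean())
```

Looping such a score over every student, with (for example) school resources or student background as inputs and PISA scores as outputs, yields the kind of robust efficiency estimates that the paper then compares across school types within its metafrontier framework.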
18 Nov 21:08

Applying the Capability Approach to the French Education System: An Assessment of the "Pourquoi pas moi ?"

by André, Kévin
This paper re-examines the notion of equality, going beyond the classic French opposition between affirmative action and meritocratic equality, and proposes shifting the French debate about equality of opportunity in education to the question of how to raise equality of capability. We present an assessment, based on the capability approach, of a mentoring programme called 'Une grande école : pourquoi pas moi ?' ('A top-level university: why not me?', PQPM) launched in 2002 by a top French business school. The assessment of PQPM is based on the pairing of longitudinal data available for 324 PQPM students with national data. Results show that the 'adaptive preferences' of the PQPM students change through a process of empowerment: students adopt new 'elitist' curricula but feel free to follow alternative paths.
22 Oct 00:00

Political Learning and Officials’ Motivations: An Empirical Analysis of the Education Reform in the State of São Paulo

by Thomaz M. F. Gemignani, Ricardo de Abreu Madeira
We investigate the occurrence of social learning among government officials in a context of decentralization of political responsibilities - the schooling decentralization reform of the state of São Paulo - and use it to analyze the officials' motivations for adhering to that program. We explore how information exchange about the newly adopted tasks is configured and which aspects of the returns to decentralization are most valued by officials in their learning process. In particular, we try to determine to what extent adhesion to the reform was due to electoral motivations or, rather, to concerns about the quality of public education provision. We present evidence that social learning is a relevant factor in the reform's implementation and find that mayors are more likely to adhere to the program upon receiving good news about the electoral returns of decentralization. On the other hand, experiences by information neighbors that turn out to be successful in improving public provision seem to be ignored in mayors' decisions on decentralization. The argument for electoral motivations is further supported by evidence that officials tend to be more responsive to information transmitted by neighbors affiliated with the same party as their own.
28 Oct 22:36

Do More Educated Leaders Raise Citizens' Education?

by Diaz-Serrano, Luis, Pérez, Jessica
This paper looks at the contribution of political leaders to enhancing citizens' education and investigates how the educational attainment of the population evolves while a leader with higher education remains in office. For this purpose, we consider educational transitions of political leaders in office and find that the educational attainment of the population increases when a more educated leader remains in office. Furthermore, we also observe that the educational attainment of the population is negatively affected when a country transitions from an educated leader to a less educated one. This result may help to explain the previous finding that more educated political leaders favor economic growth.
06 Oct 21:53

Memo to Ed Miliband: My Marxist father was wrong, too

by Steve

The Daily Mail is involved in a dispute with Ed Miliband over a recent article in the paper that accused Miliband’s father Ralph, a Marxist academic, of hating Britain. Miliband says he disagrees with the views of his father, who died in 1994, but insists that his father loved Britain. Dalrymple has written a short but powerful article for the Telegraph that reminds readers of the troubling emotional and moral foundations of Marxists like Ralph Miliband. I should know, Dalrymple says, because my father was a Marxist too:

I saw that his concern for the fate of humanity in general was inconsistent with his contempt for the actual people by whom he was surrounded, and his inability to support relations of equality with others. I concluded that the humanitarian protestations of Marxists were a mask for an urge to domination.

In addition to the emotional dishonesty of Marxism, I was impressed by its limitless resources of intellectual dishonesty. Having grown up with the Little Lenin Library and (God help us!) the Little Stalin Library, I quickly grasped that the dialectic could prove anything you wanted it to prove, for example, that killing whole categories of people was a requirement of elementary decency.

01 Oct 21:26

Bayesian Orgulity

Gordon Belot
Philosophy of Science, Volume 80, Issue 4, Page 483-503, October 2013.
23 Sep 17:56

Epigenetics smackdown at the Guardian

by whyevolutionistrue

Well, since the tussle about epigenetics involves Brits, they’re really too polite to engage in a “smackdown.” Let’s just call it a “kerfuffle.” Nevertheless, two scientists have an enlightening 25-minute discussion about epigenetics at the Guardian‘s weekly science podcast (click the link and listen from 24:30 to 49:10). If you’re science friendly and have an interest in this ‘controversy,’ by all means listen in. It’s a good debate about whether “Lamarckian” inheritance threatens to overturn the modern theory of evolution.

Readers know how I feel about the epigenetics “controversy.” “Epigenetics” was once a term used simply to mean “development,” that is, how the genes expressed themselves in a way that could construct an organism. More recently, the term has taken on the meaning of “environmental modifications of DNA,” usually involving methylation of DNA bases.  And that is important in development, too, for such methylation is critical in determining how genes work, as well as in how genes are differentially expressed when they come from the mother versus the father.

But epigenetics has now been suggested to show that neo-Darwinism is wrong: that environmental modifications of the DNA—I’m not referring to methylation that is actually itself coded in the DNA—can be passed on for many generations, forming a type of “Lamarckian” inheritance that has long been thought impossible.  I’ve discussed this claim in detail and have tried to show that environmentally-induced modifications of DNA are inevitably eroded away within one or a few generations, and therefore cannot form a stable basis for evolutionary adaptation.  Further, we have no evidence of any adaptations that are based on modifications of the DNA originally produced by the environment.

In the Guardian show, the “Coyne-ian” position is taken by Dr. George Davey Smith, a clinical epidemiologist at the University of Bristol.  The “epigenetics-will-revise-our-view-of-evolution” side is taken by Dr. Tim Spector, a genetic epidemiologist at King’s College. Smith makes many of the points that I’ve tried to make over the past few years, and I hope it’s not too self-aggrandizing to say that I think he gets the best of Spector, who can only defend the position that epigenetic modification is important within one generation (e.g., cancer) or at most across two generations.

But listen for yourself. These guys are more up on the literature than I am, and I was glad to see that, given Smith’s unrebutted arguments, neo-Darwinism is still not in serious danger. (I have to say, though, that I’d like to think that if we found stable and environmentally induced inheritance that could cause adaptive changes in the genome, I’d be the first to admit it.)

h/t: Tony


25 Aug 10:46

"The People Want the Fall of the Regime": Schooling, Political Protest, and the Economy

by Filipe Campante
We provide evidence that economic circumstances are a key intermediating variable for understanding the relationship between schooling and political protest. Using the World Values Survey data, we find that individuals with higher levels of schooling, but whose income outcomes fall short of those predicted by a comprehensive set of their observable characteristics, display a greater propensity to engage in protest activities. We argue that this evidence is consistent with the idea that a decrease in the opportunity cost of the use of human capital in labor markets encourages its use in political activities instead, and is unlikely to be explained solely by either a pure grievance effect or by self-selection. We then show separate evidence that these forces appear to matter at the country level as well: rising education levels coupled with macroeconomic weakness are associated with increased incumbent turnover, as well as subsequent pressures toward democratization.
04 Aug 11:38

Quotation of the Day…

by Don Boudreaux

… is from Parker T. Moon’s 1928 volume, Imperialism and World Politics; it’s a passage quoted often by Tom Palmer – for example, on page 418 of Tom’s 1996 essay “Myths of Individualism,” which is reprinted in Toward Liberty (David Boaz, ed., 2002):

Language often obscures truth.  More than is ordinarily realized, our eyes are blinded to the facts of international relations by tricks of the tongue.  When one uses the simple monosyllable “France” one thinks of France as a unit, an entity.  When to avoid awkward repetition we use a personal pronoun in referring to a country – when for example we say “France sent her troops to conquer Tunis” – we impute not only unity but personality to the country.  The very words conceal the facts and make international relations a glamorous drama in which personalized nations are the actors, and all too easily we forget the flesh-and-blood men and women who are the true actors.  How different it would be if we had no such word as “France,” and had to say instead – thirty-eight million men, women and children of very diversified interests and beliefs, inhabiting 218,000 square miles of territory!  Then we should more accurately describe the Tunis expedition in some such way as this: “A few of these thirty-eight million persons sent thirty thousand others to conquer Tunis.”  This way of putting the fact immediately suggests a question, or rather a series of questions.  Who are the “few”?  Why did they send the thirty thousand to Tunis?  And why did these obey?

Moon’s point applies also, of course, to trade.  When we say, for example, that “America trades with China” we too easily overlook the fact that what’s going on is nothing other than some number of flesh-and-blood individuals living in, or citizens of, a geo-political region that we today conventionally call “America” voluntarily buying and selling with a number of flesh-and-blood individuals living in, or citizens of, a geo-political region that we today conventionally call “China.”

Nothing - nothing at all - about such exchanges differs in any economically relevant way from exchanges that take place exclusively among Americans or from exchanges that take place exclusively among the Chinese.  Any downsides or upsides that you might identify as a result of Americans trading with the Chinese exist when Americans trade with Americans or when the Chinese trade with the Chinese.  Yet personifying the collective masks the reality that all exchange is carried out by flesh-and-blood individuals, and that nothing about having those exchanges occur across man-drawn political boundaries is economically relevant.

02 Aug 11:00

More On Detroit …

by admin

A few thoughts.

  1. Amazingly, there are some who believe that Detroit’s problems were caused by too little govt., or at least by something other than what it was
  2. I think even the hard-core leftists at the federal level understand the moral-hazard disaster that would accompany a federal bailout of Detroit
  3. I wish people wouldn’t refer to leftists/statists as “liberals”

Will’s column is excellent.

George F. Will: Detroit’s death by democracy – The Washington Post: Auto industry executives, who often were invertebrate mediocrities, continually bought labor peace by mortgaging their companies’ futures in surrenders to union demands. Then city officials gave their employees — who have 47 unions, including one for crossing guards — pay scales comparable to those of autoworkers. Thus did private-sector decadence drive public-sector dysfunction — government negotiating with government-employees’ unions that are government organized as an interest group to lobby itself to do what it wants to do: Grow.

Related links:
A Look Inside Detroit, Bus Edition …

23 Jul 10:00

Special Rules For The Rulers, Alcohol Edition

by admin

“All animals are equal, but some animals are more equal than others.” — George Orwell

LCBO’s new ‘simplified pricing formula’ gives diplomats, federal government 49% discount on booze | National Post: OTTAWA — Ontario’s liquor board has sweetened an already sweet deal for the federal government and foreign diplomats as it chops the prices they pay for beer, wine and booze almost in half. Late last month, the Liquor Control Board of Ontario began offering its products to federal departments and agencies at a 49% discount from the retail price that everyone else pays. The cut rate on alcoholic beverages is also available to foreign embassies, high commissions, consulates and trade missions, most of whom are located in the Ottawa area, within the LCBO’s jurisdiction. And the favoured buyers will be exempt from an LCBO policy dating from 2001 that sets minimum prices for products, under its “social responsibility” mandate.

15 Jul 14:49

My Brain Made Me Do It? (More on X-phi and bypassing)

by Eddy Nahmias

I’d already written up this post, and it raises some of the issues that are being discussed in the previous post’s thread, so I figured I’d post it now and then respond to comments and critiques of both posts as the week progresses.  (Plus we’re putting up follow-up experiments this week, so if anyone has helpful criticisms, we might try to address them.)

So, here’s a summary of my latest x-phi results on what people say about the possibility of perfect prediction based on neural activity.  The goal was not to address traditional philosophical debates (head on) but to pose challenges to a claim often made by “Willusionists” (people who claim that modern mind sciences, such as neuroscience, show that free will is an illusion).  Willusionist arguments typically have a suppressed (definitional) premise that says “Free will requires that not X” and then they proceed to argue that science shows that X is true for humans.  (In this forthcoming chapter I argue that Willusionists are often unclear about what they take X to be—determinism, physicalism, epiphenomenalism, rationalization—and that the evidence they present is not sufficient to demonstrate X for any X that should be taken to threaten free will.)

The definitional premise in these arguments is usually asserted based on Willusionists’ assumptions about what ordinary people believe (or sometimes their interpretation of an assumed consensus among philosophers).  I think their assumptions are typically wrong.  For instance, Sam Harris in his book Free Will argues that ordinary people would see that neuroscience threatens free will once they recognized that it allows in principle perfect prediction of decisions and behavior based on neural activity, even before people are aware of their decisions, and he gives a detailed description of such a scenario (on pp. 10-11).   

With former students Jason Shepard (now at Emory psych), Shane Reuter (now at WashU PNP), and Morgan Thompson (going to Pitt HPS), we tested Harris’ prediction.  I will provide the complete scenario we used in the first comment below.  

The basic idea was to develop Harris’ scenario in full detail.  We explain the possibility of a neuroimaging cap that would provide neuroscientists information about a person’s brain activity sufficient to predict with 100% accuracy everything the person will think and decide before she is aware of thinking or deciding it.  They do this while Jill wears the cap for a month (of course they predict what she’ll do even when she tries to trick them).  Along with everything else she decides or does, the neuroscientists predict how she will vote for Governor and President.  (In one version the device also allows the neuroscientists to alter Jill’s brain activity and they change her vote for Governor, but not President, without her awareness of it.) 

On the one hand, our scenario does not suggest that people’s (conscious) mental activity is bypassed (or causally irrelevant) to what they decide and do.  On the other hand, it is difficult to see how to interpret it such that it allows a causal role for a non-physical mind or soul (or perhaps for Kane-style indeterminism in the brain at torn decisions or agent-causal powers).  The scenario concludes: “Indeed, these experiments confirm that all human mental activity is entirely based on brain activity such that everything that any human thinks or does could be predicted ahead of time based on their earlier brain activity.”  (Jason, Shane, and I plan to do follow up experiments with scenarios that more explicitly rule out dualism or non-reductionism and ones that are explicitly dualist, and also use ones that include moral decisions.  See first comment.)

Some highlights of our results (using 278 GSU undergrads, though I’ve also run it on some middle school students for a Philosophy Outreach class and their responses follow the same patterns; we may try it on an mTurk sample too):

80% agree that “It is possible this technology could exist in the future”.  This surprised me since there are so many reasons one might think it couldn’t be developed.  Of the 20% who disagreed, only a handful mentioned the mind or soul or free will; instead, most mentioned mundane things like people not allowing it to be developed or technological difficulties.  Responses to this possibility question didn’t correlate with responses to those described below.

The vast majority (typically 75-90%) responded that this scenario does not conflict with free will or responsibility regarding (a) Jill’s non-manipulated vote, (b) Jill’s actions in general while wearing the scanner, or (c) people in general if this technology actually existed, though agreement for (c) was lower in the scenario where the neuroscientists could manipulate decisions. Mean scores were typically around 6 on a 7-point scale, though they were lower for the same statements in the case where the neuroscientists could manipulate decisions but didn’t. Means were similar for questions about having choices, making choices, and deserving blame.

Responses were markedly different for the decisions that were in fact manipulated by the neuroscientist (in the 2.3 range).  This is not surprising but at least it shows people are paying attention (questions were intermixed), and that they are not just offering ‘free will no matter what’ judgments in the other cases.

Responses to “bypassing” questions partially mediated responses to free will and responsibility questions.  That is, the degree to which people (dis)agreed with statements like, “Jill’s reasons had no effect on how she voted for Governor” or “If this technology existed, then people’s reasons would have no effect on what they did” partially explained their responses to statements about free will and responsibility for Jill or for people.
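
For readers who want the mechanics of such a claim, a partial-mediation check can be run with two ordinary regressions. The sketch below is purely illustrative — the variable names (condition, bypassing, rating) and the Baron–Kenny-style comparison of total and direct effects are assumptions, not the analysis actually reported here:

```python
import numpy as np
import statsmodels.api as sm

def partial_mediation(condition, bypassing, rating):
    """Rough Baron-Kenny check: does the 'bypassing' score carry part of
    the effect of the vignette condition on free-will ratings?

    condition: 0/1 array (hypothetical, e.g. manipulation vs. no-manipulation vignette)
    bypassing: agreement that reasons 'had no effect' (the mediator)
    rating:    free-will / responsibility rating (the outcome)
    """
    c = sm.OLS(rating, sm.add_constant(condition)).fit().params[1]     # total effect
    a = sm.OLS(bypassing, sm.add_constant(condition)).fit().params[1]  # condition -> mediator
    Xm = sm.add_constant(np.column_stack([condition, bypassing]))
    fit = sm.OLS(rating, Xm).fit()
    c_prime, b = fit.params[1], fit.params[2]                          # direct effect, mediator effect
    return {"total": c, "direct": c_prime, "indirect": a * b}
```

Partial mediation in this sense just means the direct effect shrinks, but does not vanish, once the bypassing score enters the model.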

We also asked whether people agreed with this statement: “If this technology existed, it would show that each decision a person makes is caused by particular brain states.”  Even though the scenario does not explicitly discuss neural causation, 2/3 agreed and responses to this statement did not correlate with responses to statements about free will and responsibility. 

Now, we think these results falsify the armchair prediction made by Harris (and suggested by other Willusionists) about what ordinary people believe about free will.  And I also think they provide support for the idea that most people are amenable to a naturalistic understanding of the mind and free will, or better, that they have what I call a ‘theory-lite’ understanding (with Morgan, I develop this idea in this paper, which is a companion piece to this paper by Joshua Knobe—feel free to broaden the discussion to these papers if you look at them, and I’ll have to say more about the 'theory-lite' view in response to Thomas’ questions).

But rather than further describing our interpretations of these results (and some potential objections to them and data problematic for them, such as the data presented by Thomas in comments), which I will do in the comments, let me start by asking y’all what you think these results show, if anything, about people’s beliefs, theories, and intuitions about free will, responsibility, the mind-body relation, etc.  And to the extent that you think they don’t show much, why is that?  Are there variations of the scenarios (or questions) that you think would help us show more? (Feel free to throw in critiques of x-phi along the way!)

13 May 16:04

Su Shi hopes his son is stupid

by Sedulia

Families, when a child is born, want it to be intelligent.
I, through intelligence
having wrecked my whole life,
only hope the baby will prove
ignorant and stupid.
Then he will crown a tranquil life
by becoming a cabinet minister.

   --Chinese poet 蘇軾 Su Shi (1037-1101), also known as Su Dongpo, on the birth of his son. Translation by Arthur Waley (1889-1966).

人皆生子望聰明
我被聰明誤一生
但願吾兒魯且愚
無災無難到公卿。