Shared posts

12 Oct 02:11

Where are the Women (in Economics)?

by Mark Thoma

"It’s something systemic to the field"

Where are the Women?, by Jesse Romero, FRB Richmond: Women earned 34 percent of economics Ph.D.s in 2011... That might sound like a lot, but it’s much lower than the 46 percent of all doctorate degrees earned by women, and the smallest share among any of the social sciences. ...
The gender gap in economics gets larger at each stage of the profession, a phenomenon described as the “leaky pipeline.” In 2012, women were 28 percent of assistant professors, the first rung on the academic ladder; 22 percent of associate professors with tenure; and less than 12 percent of full professors...
In part, this might reflect the long lag between earning a Ph.D. and attaining the rank of full professor; if more women are entering the field today than 20 years ago, more women might be full professors in the future. But the share of new female Ph.D. students is actually lower than it was in 1997,... which means women’s share of economics faculty could actually shrink.
Donna Ginther of the University of Kansas and Shulamit Kahn of Boston University also found leaks in the pipeline. In several studies, they have shown that women are less likely than men to progress at every stage of an academic career... Furthermore, women are less likely to be promoted in economics than in other social sciences, and even than in more traditionally male fields such as engineering and the physical sciences.
In part, the disparity between men and women could be due to different choices, such as having children or focusing more on teaching than on research. ...
But even after controlling for education, ability, productivity, and family choices, Ginther and Kahn found that a gap of about 16 percentage points persists in the likelihood of promotion to full professor in economics — a much larger gap than in other disciplines. “It’s something systemic to the field,” says economist Claudia Goldin of Harvard University.
Whatever that something is, ... the problem might be the way economics is taught, Goldin says. “... We’re teaching economics the same way we did when women didn’t matter. But now women do matter. So how do we translate economics into ‘girlish’?” ...
Does it actually matter how many female economists there are? Yes, says Susan Athey of Stanford University. “You just don’t get the best allocation of human capital” when one category of people is excluded. “Losing out on a chunk of the population is wasteful.” (In 2007, Athey was the first woman to receive the John Bates Clark Medal, given to the American economist under 40 who has made the greatest contribution to the field.) In addition, a survey by Ann Mari May and Mary McGarvey of the University of Nebraska-Lincoln and Robert Whaples of Wake Forest University found that male and female economists have significantly different opinions on public policy questions such as the minimum wage, labor regulations, and health insurance. As the authors concluded, “Gender diversity in policymaking circles may be an important aspect in broadening the menu of public policy choices.” ...
11 Oct 04:42

How The Power Of Ocean Waves Could Yield Freshwater With Zero Carbon Emissions

by Jeff Spross

A new project in Australia aims to create freshwater by harnessing the kinetic energy of ocean waves, RenewEconomy reports. Run by the Perth-based firm Carnegie Wave Energy in cooperation with the Water Corporation, and supported by a $1.27 million grant from the Australian Federal Government’s AusIndustry Clean Technology Innovation Program, the plant will use Carnegie’s proprietary CETO wave energy technology to power reverse osmosis desalination. The resulting process, free of carbon emissions, “will be a world first,” according to CEO Michael Ottaviano.

Reverse osmosis desalination has been in use for several decades, and works simply enough: high pressure is used to force saltwater through a membrane, producing drinkable freshwater on the other end. Traditionally, the pressure is provided by electric pumps running on fossil-fuel power, resulting in both carbon dioxide emissions and many points where energy can be lost.

But instead of relying on those electric pumps, Carnegie is using the latest iteration of its CETO technology — CETO 5 — to supply that pressure with wave energy instead. Underwater buoys eleven meters in diameter are installed offshore, and as ocean waves catch them, the movement supplies hydraulic power to pump seawater up underground pipes to shore. At that point, the water runs into the desalination plant, where it directly supplies the pressure for the reverse osmosis. Some of that hydraulic energy is also converted into electric power as needed.

[Diagram: Carnegie's CETO wave-powered desalination system. CREDIT: Carnegie Wave Energy]

The resulting system not only cuts out all carbon dioxide emissions, it also greatly reduces the points where energy can be lost, making the process much more energy efficient and cost-effective.

The two megawatt demonstration project will be situated on Garden Island, near the coastal city of Perth in Western Australia, and will ultimately supply roughly 55 billion litres of drinking water per year. A previous desalination plant set up by Water Corporation in Kwinana, south of Perth, already supplies 45 billion litres. The final total of 100 billion litres a year is half the city’s drinking water needs.

Southwestern Australia has been hit especially hard by drought, and it has not shared in the reprieve from dry conditions that the rest of the continent has enjoyed. Climate change models project that Perth’s traditional freshwater supplies will dry up even further by 2030. Meanwhile, Australia as a whole has been suffering the ravages of climate change, with record-setting heat waves, floods, and other extreme weather. So anything that could provide the country with freshwater without adding any more to the globe’s carbon emissions is a welcome development.

HT: RenewEconomy

11 Oct 04:17

Congressman Says It’s ‘Arrogant’ To Believe In Man-Made Climate Change

by Annie-Rose Strasser

At a town hall event in Indiana on Monday, Rep. Todd Rokita (R-IN) said that humans are not responsible for climate change, and that it is “arrogant” to think they could be.

Rokita, who has previously questioned man-made climate change, saying that it is “under debate,” took a more definitive stand against it during his Monday town hall:

“I think it’s arrogant that we think as people that we can somehow change the climate of the whole earth when science is telling us that there’s a cycle to all this,” he said. “And that cycle was occurring before the industrial revolution and I suspect will occur way into the future.”

Rokita is part of a vocal caucus of climate deniers in Congress, many of whom are funded by oil and coal money. Rokita himself has received a total of $52,200 — $30,100 from oil and gas, and $22,100 from coal, according to Open Secrets.

By training, Rokita is a lawyer, so he is not actually qualified to scientifically assess the earth’s warming and cooling patterns. Of the climate scientists who have made a career of doing just that, 97 percent agree that climate change is real, and that humans are contributing to it.

(HT: Huffington Post)

30 Aug 05:12

Got Culture?

by Norbert
Hima2303

"The key is syntax!"


In the last chapter of Dehaene’s Reading in the Brain he speculates about one of the really big human questions: whence culture? The book’s big thesis, concentrating on reading and writing as vehicles for cultural transmission, is the Neuronal Recycling Thesis (NRT). The idea is simple: culture supervenes on neuronal mechanisms that arose to serve other ends. Think exaptation as applied to culture. Thus, reading and writing are underpinned by proto-letters, which themselves live on ecologically natural patterns useful for object recognition. So too, the hope goes, for the rest of what we think of as culture. However, as Dehaene quickly notes, if this is the source, and “we share most, if not all of these processors [i.e. recycled structures NH] with other primates, why are we the only species to have generated immense and well-developed cultures” (loc 4999). Dehaene has little patience for those who fail to see a qualitative difference between human cultural achievements and those of our ape cousins.


…the scarcity of animal cultures and the paucity of their contents stand in sharp contrast to the immense list of cultural traditions that even the smallest human groups develop spontaneously. (loc 4999)


Dehaene specifically points to the absence of “graphic invention” in primates as “not due to any trivial visual or motor limitation” or to a lack of interest in drawing, apparently (loc 5020). He puts the problem nicely:


If cultural invention stems from the recycling of brain mechanisms that humans share with other primates, the immense discrepancy between the cultural skills of human beings and chimpanzees needs to be explained. (loc 5020)


He also surveys several putative answers, and finds them wanting. His remarks on Tomasello (loc 5046-5067) seem to me quite correct, noting that though Tomasello’s mind-reading account might explain how culture might spread and how its achievements might be retained across generations:[1]


…it says little…about the initial spark that triggers cultural invention. No doubt the human species is particularly gifted at spreading culture – but it is also the only species to create culture in the first place. (loc 5067, his emphasis)


So what’s Dehaene’s proposal?


My own view is that another singular change was needed - the capacity to arrive at new combinations of ideas and the elaboration of a conscious mental synthesis (loc 5067).


This is quite a mouthful, and so far as I can see, what Dehaene means by this is that our frontal lobe got bigger and that this provided a “‘neuronal workspace’ whose main function is to assemble, confront, recombine, and synthesize knowledge” (loc 5089).


I don’t find this particularly enlightening. It’s neuro-speak for something happened, relevant somethings always involving the brain (wouldn’t it be refreshing if every once in a while the kidney, liver or heart were implicated!). In other words, the brain got bigger and we got culture. Hmm. This might be a bit unfair. Dehaene does say more.


He notes that the primate cortex, in contrast to ours, is largely modular, with “its own specific inputs, internal structure, and outputs.” Our prefrontal cortical areas, in contrast, “emit and receive much more diverse cortical signals” and so “tend to be less specialized.” In addition, our brains are less “modular” and have greater “bandwidth.” This works to prevent “the division of data and allows our behavior to be guided by any combination of information from past or present experience.” (loc 5089)


Broken down to its essentials, Dehaene is here identifying the demodularization of thought as the key ingredient to the emergence of culture. As he notes (loc 5168), in this he agrees with Liz Spelke (and others) who has argued that the general ability to integrate information across modules is what spices up our thinking beyond what we find in other primates.  Interestingly for my purposes here, Spelke ties this capacity for cross module integration to the development of linguistic facility (see here).


This assumption, that language is a necessary condition for the emergence of the kind of culture we see in humans, is consistent with the hypothesis Minimalists have been assuming (following people like Tattersall (here)) that the anthropological “big bang,” which occurred in the last 25,000-50,000 years, piggybacked on the emergence of FL in the last 50,000-100,000 years. Moreover, it’s language as module buster that gets the whole amazing culture show on the road.


But what features of language make it a module buster?  What allows grammar to “assemble and recombine” otherwise modular information? What’s the secret linguistic sauce?


Sadly, neither Dehaene nor Spelke says. Which is too bad, as my lunch buddies (thx Paul, Bill) and I have discussed this question off and on for several years now, without a lot to show for it. However, let me try to suggest a key characteristic that we (aka I) believe is implicated. The key is syntax!


The idea is that FL provides a general-purpose syntax for combining information trapped within modules. Syntax is key here, for I am assuming (almost certainly wrongly, so feel free to jump in at any point) that what makes information modular is some feature of the module-internal representations that makes it difficult for them to “combine” with extra-modular information. I say syntax because once information trapped within one module can combine with information in another module, it appears that, more often than not, the combination can be interpreted. Thus, it’s not that the combination of modularly segregated concepts is semantically indigestible; rather, the problem seems to be getting the concepts to talk to one another in the first place, and, I take this to mean, to syntactically combine. So module busting will amount to figuring out how to treat otherwise distinct expressions in the same way. We need some kind of abstract feature that, when attached to an arbitrary expression, allows it to combine with any other expression from any other module. What we need, in effect, is what Chomsky called an “edge feature” (EF), a thingamajig that allows expressions to combine freely.


Now, if you are like me, you will not find this proposal a big step forward, for it seems more to name a solution than to provide one. After all, what can EFs be such that they possess such powers? I am not sure, but I am pretty confident that whatever this power is, it’s purely syntactic. It is an intrinsic property of lexical atoms and an inherited property of congeries of such (i.e. outputs of Merge). I have suggested (here) that EFs are, in fact, labels, which function to close Merge in the domain of the lexical items (LIs). In the same place I proposed that labeling is the distinctively linguistic operation which, in concert with other cognitively recycled operations, allowed for the emergence of FL.


How might labels do this?  Good question. An answer will require addressing a more basic question: what are labels?  We know what they must do: they must license the combination both of lexical atoms and complexes of such.  Atomic LIs are labels.  Complexes of LIs are labeled in virtue of containing atomic ones. The $64,000 question (doesn’t sound like much of a prize anymore, does it?) is how to characterize this.  Stay tuned.
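
To make the intuition a bit more concrete, here is a toy sketch and nothing more than that: the class names and the convention that the first member of a Merge pair projects its label are purely illustrative assumptions, not part of any actual proposal. The only point it illustrates is closure: because every output of Merge carries a label, it is the same kind of object as a lexical atom and so can re-enter Merge freely.

    # Toy sketch only: "label" stands in for the edge feature, and the
    # convention that the first argument projects its label is an
    # illustrative assumption, not a claim about how labeling works.
    class SynObj:
        def __init__(self, label, parts=None):
            self.label = label        # what makes the object visible to Merge
            self.parts = parts or []  # empty for lexical atoms
        def __repr__(self):
            if not self.parts:
                return self.label
            return "[%s %s]" % (self.label, " ".join(map(repr, self.parts)))

    def merge(x, y):
        # The output is itself labeled, so Merge is closed over labeled
        # expressions: atoms and complexes alike can combine again.
        return SynObj(x.label, [x, y])

    the, dog, barked = SynObj("the"), SynObj("dog"), SynObj("barked")
    dp = merge(the, dog)       # [the the dog]
    print(merge(barked, dp))   # [barked barked [the the dog]]

On this picture, anything bearing such a feature can combine with anything else bearing one, whatever module its content originally came from.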


So, culture supervenes on language and language is the recycling of more primitive cognitive operations spiced with a bit of labeling. Need I say that this is a very “personal” (read “extremely idiosyncratic and not currently fashionable”) view?  Current MP accounts are very label-phobic.  However, the question Dehaene raises is a good one, especially for theories like MP that presuppose lots of cognitive recycling.[2]  It’s not one whose detailed answer is anywhere on the horizon. But like all good questions, I suspect that it will have lots of staying power and will provide lots of opportunities for fun conversations.




[1]It’s good to see that Tomasello is capable of begging the interesting question regardless of where he puts his efforts.

[2]See discussion in the comments I had with Jan Koster about this in my previous post (here).

30 Aug 04:56

Monster

Hima2303

Another one like the SEP of Douglas Adams'... loved it :)

It was finally destroyed with a nuclear weapon carrying the destructive energy of the Hiroshima bomb.
17 Aug 11:59

Meteor Showers

Remember, meteors always hit the tallest object around.
20 Jul 16:33

Social Media

The social media reaction to this asteroid announcement has been sharply negative. Care to respond?
11 Jul 04:55

07/5/13 PHD comic: 'Written'

Piled Higher & Deeper by Jorge Cham
www.phdcomics.com
title: "Written" - originally published 7/5/2013


07 Jul 17:34

‘Ladies Still not Empowered in Kerala?’ Questions Raised by the Solar Scam

by jdevika
How does one respond critically and effectively when non-politics, non-government, and non-sense, all rolled together, assail the political public? I have been thinking about this recently — surely, this is a question that troubles all those who would wish to keep the focus of public life on politics and power. We witness, in present-day Kerala, politics […]
27 Jun 09:06

Prometheus

'I'm here to return what Prometheus stole.' would be a good thing to say if you were a fighter pilot in a Michael Bay movie where for some reason the world's militaries had to team up to defeat every god from human mythology, and you'd just broken through the perimeter and gotten a missile lock on Mount Olympus.
16 Jun 13:42

06/5/13 PHD comic: 'Friend Request'

Piled Higher & Deeper by Jorge Cham
www.phdcomics.com
title: "Friend Request" - originally published 6/5/2013


05 Jun 17:11

SpecGram—The Speculative Grammarian Survey of Grammar Writers—The Writing Process—Morris Swadesh III

For the past 42 months, Speculative Grammarian’s Office of Linguistic Documentation has conducted an extensive survey of linguists who have published descriptive grammars. Over 600 grammar writers responded to our extensive questionnaire, covering all areas of data-gathering, analysis, theory, and the processes of writing and publishing.
05 Jun 13:42

06/3/13 PHD comic: 'Professor Proverbs'

Piled Higher & Deeper by Jorge Cham
www.phdcomics.com
title: "Professor Proverbs" - originally published 6/3/2013


05 May 18:06

New blog on history and philosophy of language sciences

by Barbara Partee

There’s a new blog, “History and Philosophy of the Language Sciences”, edited by James McElvenny at the University of Sydney. I’m the invited author of the third post in it, ‘On the history of the question of whether natural language is “illogical”’, which came out on May 1. For now, new posts are planned weekly. Here’s the blog address: http://hiphilangsci.com.

Let any interested friends know about it, because there is a desire for good discussion of the entries and for interesting new posts.

05 May 09:37

SpecGram—Handbook for Linguistic Elicitation, Volume 28: Laziness and Inactivity—Book Announcement from Psammeticus Press

Volume 28 of the Handbook for Linguistic Elicitation focuses on that essential of human nature, laziness. Everybody knows that linguists need to elicit words for activities like “killing,” “breaking” and “eating,” but let’s face it, we can’t stop there. And truth be told, these “highly transitive” verbs just aren’t that important to most actual people.
23 Apr 11:05

Linguistics flowchart: What field?

by Joe
This is floating around fb at the moment (yes, click to embiggen). ... I saw it first on the Studentische Tagung Sprachwissenschaft page. The poster itself is from Cascadilla, the only source a linguist needs to stay stocked with excellent posters, t-shirts, bumper stickers and gifts. This is on their CafePress link as a poster. I only today saw the 'Loose lips make bilabial trills' poster. 


But somehow, the chart doesn't help me ... I want to do basically all of it.
23 Apr 10:53

A note to Readers

by Norbert

I have been blogging here at Faculty of Language now for a little over 6 months. An unexpected pleasure has come from reading the comments. Most have been cogent and provocative, and even when I did not fully agree with the point advanced, I found it helpful to think through (at least in part) what background assumptions motivated the comment and how these related to what I tend to hold true (or true enough to explore).  However, the real pleasure has come not from these edifying remarks. Some (in the logical sense of ‘at least one’) commentators have granted me a new discovery: there exists a very robust new rule of inference that seems as natural to some as their accents – modus non sequitur (MNS).  MNS has the intriguing form first theoretically identified by Sid Morgenbesser over 35 years ago: if P, why not Q?  This is a very powerful argument form, licensing any conclusion from any set of premises. Furthermore, it provides all the structure needed for vigorous comment.  Masters of this principle of reasoning can go on (seemingly) forever tying apparently inconsistent propositions together into a marvelous colorful skein of mangled thought.  Magical realist literature has more logical glue than these productions, and, for sheer entertainment value, I cannot recommend them highly enough. Fortunately, those blessed with this turn of mind feel it is an almost holy obligation to weigh in on most every topic at great length, spreading joyful confusion all around.  I had hoped this blog would promote reasoned discussion of topics central to contemporary Generative Grammar.  I did not expect it to also showcase some of the finest examples of contemporary stream of consciousness “thought.” So thanks: both to the thoughtful and, especially, to the entertaining. The former for making me think and the latter for making me laugh and laugh and laugh. Thx.

23 Apr 06:09

Public Intellectualism in Comparative Context: Different Countries, Different Disciplines

by J. Bradford DeLong


Participate in "Public Intellectualism" Conference Sessions Remotely (University of Notre Dame):

If you are unable to attend the "Public Intellectualism" conference this Monday through Wednesday, April 22-24, you can still take part in the conference sessions and engage the discussants over the web. To participate remotely, for FREE, check out our conference blog at http://blogs.nd.edu/ndias/ -- on the blog's Event pages you can watch the conference sessions LIVE via Web Simulcast and ask questions to the conference presenters and commentators in the Comment field at the bottom of the pages. We encourage all of our virtual participants to ask questions early and often.



This international conference, taking place April 22-24, 2013 in McKenna Hall's Notre Dame Conference Center at the University of Notre Dame, will focus on the roles played by public intellectuals—persons who exert a large influence in the contemporary society of their countries by virtue of their thought, writing, or speaking—in various countries around the world and in their different professional roles. Leading experts from multiple disciplines will come together to approach this elusive topic of public intellectualism from different perspectives.

23 Apr 06:06

04/19/13 PHD comic: 'How you spend your time'

Hima2303

Phd scholars across the world, unite! :)

Piled Higher & Deeper by Jorge Cham www.phdcomics.com
title: "How you spend your time" - originally published 4/19/2013


23 Apr 06:01

Silence

All music is just performances of 4'33" in studios where another band happened to be playing at the time.
02 Apr 13:11

How Much Longer Until Humanity Becomes A Hive Mind?

by George Dvorsky


Last month, researchers created an electronic link between the brains of two rats separated by thousands of miles. This was just another reminder that technology will one day make us telepaths. But how far will this transformation go? And how long will it take before humans evolve into a fully-fledged hive mind? We spoke to the experts to find out.

I spoke to three different experts, all of whom have given this subject considerable thought: Kevin Warwick, a British scientist and professor of cybernetics at the University of Reading; Ramez Naam, an American futurist and author of NEXUS (a scifi novel addressing this topic); and Anders Sandberg, a Swedish neuroscientist from the Future of Humanity Institute at the University of Oxford.

They all told me that the possibility of a telepathic noosphere is very real — and it's closer to reality than we might think. And not surprisingly, this would change the very fabric of the human condition.

Connecting brains

My first question to the group had to do with the technological requirements. How is it, exactly, that we’re going to connect our minds over the Internet, or some future manifestation of it?

“I really think we have sufficient hardware available now — tools like Braingate,” says Warwick. “But we have a lot to learn with regard to how much the brain can adapt, just how many implants would be required, and where they would need to be positioned.”

Naam agrees that we’re largely on our way. He says we already have the basics of sending some sorts of information in and out of the brain. In humans, we’ve done it with video, audio, and motor control. In principle, nothing prevents us from sending that data back and forth between people.

“Practically speaking, though, there are some big things we have to do,” he tells io9. “First, we have to increase the bandwidth. The most sophisticated systems we have right now use about 100 electrodes, while the brain has more than 100 billion neurons. If you want to get good fidelity on the stuff you’re beaming back and forth between people, you’re going to want to get on the order of millions of electrodes.”

Naam says we can build the electronics for that easily, but building it in such a way that the brain accepts it is a major challenge.

The second hurdle, he says, is going beyond sensory and motor control.

“If you want to beam speech between people, you can probably tap into that with some extensions of what we’ve already been doing, though it will certainly involve researchers specifically working on decoding that kind of data,” he says. “But if you want to go beyond sending speech and get into full blown sharing of experiences, emotions, memories, or even skills (a la The Matrix), then you’re wandering into unknown territory.”

Indeed, Sandberg says that picking up and translating brain signals will be a tricky matter.

“EEG sensors have lousy resolution — we get an average of millions of neurons, plus electrical noise from muscles and the surroundings,” he says. “Subvocalisation and detecting muscle twitches is easier to do, although they will still be fairly noisy. Internal brain electrodes exist and can get a lot of data from a small region, but this of course requires brain surgery. I am having great hopes for optogenetics and nanofibers for making kinder, gentler implants that are less risky to insert and easier on their tissue surroundings.”

The real problem, he says, is translating signals in a sensible way. “Your brain representation of the concept "mountain" is different from mine, the result not just of different experiences, but also on account of my different neurons. So, if I wanted to activate the mountain concept, I would need to activate a disperse, perhaps very complex network across your brain,” he tells io9. “That would require some translation that figured out that I wanted to suggest a mountain, and found which pattern is your mountain.”

Sandberg says we normally "cheat" by learning a convenient code called language, where all the mapping between the code and our neural activations is learned as we grow. We can, of course, learn new codes as adults, and this is rarely a problem — adults already master things like Morse code, SMS abbreviations, or subtle signs of gesture and style. Sandberg points to the recent experiments by Nicolelis connecting brains directly, research which shows that it might be possible to get rodents to learn neural codes. But he says this learning is cumbersome, and we should be able to come up with something simpler.

One way is to boost learning. Some research shows that amphetamine and presumably other learning stimulants can speed up language learning. Recent work on the Nogo Receptor suggests that brain plasticity can be turned on and off. “So maybe we can use this to learn quickly,” says Sandberg.

Another way is to have software do the translation. It is not hard to imagine machine learning figuring out which neural codes or mumbled keywords correspond to which signal — but setting up the training so that users find it acceptably fast is another matter.
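
As a purely illustrative sketch of that translation step, here is the sort of thing one might imagine; the feature vectors, concept labels, and nearest-pattern decoder below are assumptions invented for the example, not anything taken from the research discussed here.

    # Illustrative sketch only: learn which (simulated) neural feature
    # vectors go with which intended concepts, then map new activity to
    # the nearest learned pattern. Real systems would be far more involved.
    import numpy as np

    def train_decoder(examples):
        # examples: (feature_vector, concept_label) pairs recorded while
        # the user deliberately "thinks" each concept.
        centroids = {}
        for label in {lbl for _, lbl in examples}:
            vecs = np.array([v for v, lbl2 in examples if lbl2 == label])
            centroids[label] = vecs.mean(axis=0)
        return centroids

    def decode(centroids, vector):
        # Return the concept whose learned pattern is closest to this activity.
        return min(centroids, key=lambda lbl: np.linalg.norm(vector - centroids[lbl]))

    training = [
        (np.array([1.0, 0.1, 0.0]), "mountain"),
        (np.array([0.9, 0.2, 0.1]), "mountain"),
        (np.array([0.0, 1.0, 0.9]), "hello"),
        (np.array([0.1, 0.8, 1.0]), "hello"),
    ]
    decoder = train_decoder(training)
    print(decode(decoder, np.array([0.95, 0.15, 0.05])))  # -> mountain

A real system would face exactly the problem Sandberg raises: gathering enough labeled examples, quickly enough, that users find the training tolerable.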

“So my guess is that if pairs of people really wanted to ‘get to know each other’ and devoted a lot of time and effort, they could likely learn signals and build translation protocols that would allow a lot of ‘telepathic’ communication — but it would be very specific to them, like the ‘internal language’ some couples have,” says Sandberg. “For the weaker social links, where we do not want to spend months learning how to speak to each other, we would rely on automatically translated signals. A lot of it would be standard things like voice and text, but one could imagine adding supporting ‘subtitles’ showing graphics or activating some neural assemblies.”

Bridging the gap

In terms of the communications backbone, Sandberg believes it’s largely in place, but it will likely have to be extended much further.

“The theoretical bandwidth limitations of even a wireless Internet are far, far beyond the bandwidth limitations of our brains — tens of terabits per second,” he told me, “and there are orbital angular momentum methods that might get far more.”

Take the corpus callosum, for example. It has around 250 million axons, and even at maximal neural firing rates that amounts to only about 25 gigabits per second, yet that is enough to keep the hemispheres connected such that we feel we are a single mind.
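
As a rough check on that figure, and assuming (purely for illustration) that each axon carries on the order of 100 bits per second at maximal firing rates:

    # Back-of-the-envelope only; the per-axon information rate is assumed.
    axons = 250e6                # axons in the corpus callosum (as quoted above)
    bits_per_axon_per_sec = 100  # assumed order-of-magnitude rate at maximal firing
    print(axons * bits_per_axon_per_sec / 1e9, "Gbit/s")  # ~25 Gbit/s

That is comfortably below the tens of terabits per second Sandberg cites for a wireless link, which is the point of the comparison.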

As for the interface, Warwick says we should stick to implanted multi-electrode arrays. These may someday become wireless, but they’ll have to remain wired until we learn more about the process. Like Sandberg, he adds that we’ll also need to develop adaptive software interfacing.

Naam envisions something laced throughout the brain, coupled with some device that could be worn on the person’s body.

“For the first part, you can imagine a mesh of nano-scale sensors either inserted through a tiny hole in the skull, or somehow through the brain’s blood vessels. In Nexus I imagined a variant on this — tiny nano-particles that are small enough that they can be swallowed and will then cross the blood-brain barrier and find their way to neurons in the brain.”

Realistically, Naam says, whatever we insert in the brain is going to have pretty low energy consumption. The implant, or mesh, or nano-particles could communicate wirelessly, but to boost their signal — and to provide them power — scientists will have to pair them with something the person wears, like a cap, a pair of glasses, a headband — anything that can be worn very near the brain so it can pick up those weak signals and boost them, including signals from the outside world that will be channeled into the brain.

How soon before the hive mind?

Warwick believes that the technologies required to build an early version of the telepathic noosphere are largely in place. All that’s required, he says, is “money on the table” and the proper ethical approval.

Sandberg concurs, saying that we’re already doing it with cellphones. He points to the work of Charles Stross, who suggests that the next generation will never have to be alone, get lost, or forget anything.

“As soon as people have persistent wearable systems that can pick up their speech, I think we can do a crude version,” says Sandberg. “Having a system that’s on all the time will allow us to get a lot of data — and it better be unobtrusive. I would not be surprised to see experiments with Google Glasses before the end of the year, but we’ll probably end up saying it’s just a fancy way of using cellphones.”

At the same time, Sandberg suspects that “real” neural interfacing will take a while, since it needs to be safe, convenient, and have a killer app worth doing. It will also have to compete with existing communications systems and their apps.

Similarly, Naam says we could build a telepathic network in a few years, but with “very, very, low fidelity.” But that low fidelity, he says, would be considerably worse than the quality we get by using phones — or even text or IM. “I doubt anyone who’s currently healthy would want to use it.”

But for a really stable, high bandwidth system in and out of the brain, that could take upwards of 15 to 20 years, which Naam concedes is optimistic.

“In any case, it’s not a huge priority,” he says. “And it’s not one where we’re willing to cut corners today. It’s firmly in the medical sphere, and the first rule there is ‘do no harm’. That means that science is done extremely cautiously, with the priority overwhelmingly — and appropriately — being not to harm the human subject.”

Nearly supernatural

I asked Sandberg how the telepathic noosphere will disrupt the various ways humans engage in work and social relations.

“Any enhancement of communication ability is a big deal,” he responded. “We humans are dominant because we are so good at communication and coordination, and any improvement would likely boost that. Just consider flash mobs or how online ARG communities do things that seem nearly supernatural.”

Cell phones, he says, made our schedules flexible in time and space, allowing us to coordinate where to meet on the fly. He says we’re also adding various non-human services like apps and Siri-like agents. “Our communications systems are allowing us to interact not just with each other but with various artificial agents,” he says. Messages can be stored, translated and integrated with other messages.

“If we become telepathic, it means we will have ways of doing the same with concepts, ideas and sensory signals,” says Sandberg. “It is hard to predict just what this will be used for since there are so few limitations. But just consider the possibility of getting instruction and skills via augmented reality and well designed sensory/motor interfaces. A team might help a member perform actions while ‘looking over her shoulder’, as if she knew all they knew. And if the system is general enough, it means that you could in principle get help from any skilled person anywhere in the world.”

In response to the same question, Naam noted that communication boosts can accelerate technical innovation, but more importantly, they can also accelerate the spread of any kind of idea. “And that can be hugely disruptive,” he says.

But in terms of the possibilities, Naam says the sky’s the limit.

“With all of those components, you can imagine people doing all sorts of things with such an interface. You could play games together. You could enter virtual worlds together,” he says. “Designers or architects or artists could imagine designs and share them mentally with others. You could work together on any type of project where you can see or hear what you’re doing. And of course, sex has driven a lot of information technologies forward — with sight, sound, touch, and motor control, you could imagine new forms of virtual sex or virtual pornography.”

Warwick imagines communication in the broadest sense, including the technically-enabled telepathic transmission of feelings, thoughts, ideas, and emotions. “I also think this communication will be far richer when compared to the present pathetic way in which humans communicate.” He suspects that visual information may eventually be possible, but that will take some time to develop. He even imagines the sharing of memories. That may be possible, he says, “but maybe not in my lifetime.”

Put all this together, says Warwick, and “the body becomes redundant.” Moreover, when connected in this way “we will be able to understand each other much more.”

A double-edged sword

We also talked about the potential risks.

“There’s the risk of bugs in hardware or software,” says Naam. “There’s the risk of malware or viruses that infect this. There’s the risk of hackers being able to break into the implants in your head. We’ve already seen hackers demonstrate that they can remotely take over pacemakers and insulin pumps. The same risks exist here.”

But the big societal risk, says Naam, stems entirely from the question of who controls this technology.

“That’s the central question I ask in Nexus,” he says. “If we all have brain implants, you can imagine it driving a very bottom-up world — another Renaissance, a world where people are free and creating and sharing more new ideas all the time. Or you can imagine it driving a world like that of 1984, where central authorities are the ones in control, and they’re the ones using these direct brain technologies to monitor people, to keep people in line, or even to manipulate people into being who they’re supposed to be. That’s what keeps me up at night.”

Warwick, on the other hand, told me that the “biggest risk is that some idiot — probably a politician or business person — may stop it from going ahead.” He suspects it will lead to a digital divide between those who have and those who do not, but that it’s a natural progression very much in line with evolution to date.

In response to the question of privacy, Sandberg quipped, “Privacy? What privacy?”

Our lives, he says, will reside in the cloud, and on servers owned by various companies that also sell results from them to other organizations.

“Even if you do not use telepathy-like systems, your behaviour and knowledge can likely be inferred from the rich data everybody else provides,” he says. “And the potential for manipulation, surveillance and propaganda are endless.”

Our cloud exoselves

Without a doubt, the telepathic noosphere will alter the human condition in ways we cannot even begin to imagine. The Noosphere will be an extension of our minds. And as David Chalmers and Andy Clark have noted, we should still regard external mental processes as being genuine even though they’re technically happening outside our skulls. Consequently, as Sandberg told me, our devices and “cloud exoselves” will truly be extensions of our minds.

“Potentially very enhancing extensions,” he says, “although unlikely to have much volition of their own.”

Sandberg argues that we shouldn’t want our exoselves to be too independent, since they’re likely to make mistakes in our name. “We will always want to have veto power, a bit like how the conscious level of our minds has veto on motor actions being planned,” he says.

Veto power over our cloud exoselves? The future will be a very strange place, indeed.

Top image: agsandrew/Shutterstock, Nicolelis lab.

02 Apr 13:05

SpecGram—Restored Things You Didn’t Know You Didn’t Know—Madalena Cruz-Ferreira

This 29th collection of students’ pearls of wisdom, laboriously digitised from hand-written papers, demonstrates once again how students new to the study of language speculate about grammar after having imperfectly absorbed what their teachers think they have taught them.