Shared posts

19 May 03:43

A unified coding strategy for processing faces and voices


faces, voices

Galit Yovel, Pascal Belin. Both faces and voices are rich in socially-relevant information, which humans are remarkably adept at extracting, including a person's identity, age, gender, affective state, personality, etc. Her....
19 May 03:37

The Electrophysiological Underpinnings of Processing Gender Stereotypes in Language

by Anna Siyanova-Chanturia et al.

gender N400

by Anna Siyanova-Chanturia, Francesca Pesciarelli, Cristina Cacciari

Despite the widely documented influence of gender stereotypes on social behaviour, little is known about the electrophysiological substrates engaged in the processing of such information when conveyed by language. Using event-related brain potentials (ERPs), we examined the brain response to third-person pronouns (lei “she” and lui “he”) that were implicitly primed by definitional (passeggeraFEM “passenger”, pensionatoMASC “pensioner”), or stereotypical antecedents (insegnante “teacher”, conducente “driver”). An N400-like effect on the pronoun emerged when it was preceded by a definitionally incongruent prime (passeggeraFEM – lui; pensionatoMASC – lei), and a stereotypically incongruent prime for masculine pronouns only (insegnante – lui). In addition, a P300-like effect was found when the pronoun was preceded by definitionally incongruent primes. However, this effect was observed for female, but not male participants. Overall, these results provide further evidence for on-line effects of stereotypical gender in language comprehension. Importantly, our results also suggest a gender stereotype asymmetry in that male and female stereotypes affected the processing of pronouns differently.
19 May 03:36

Learning Multisensory Integration and Coordinate Transformation via Density Estimation

by Joseph G. Makin et al.


by Joseph G. Makin, Matthew R. Fellows, Philip N. Sabes

Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations.
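The density-estimation recipe this abstract describes — training a restricted Boltzmann machine with contrastive divergence on "unisensory" populations driven by a common latent cause — can be sketched in a few lines. The toy data, population sizes, and learning rate below are our own illustrative choices, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "unisensory" data: two 8-unit binary populations that redundantly
# encode one binary latent cause (a stand-in for the paper's population
# codes; this construction is ours, not the authors').
def make_batch(n=64):
    cause = rng.integers(0, 2, size=(n, 1))
    p_on = np.where(cause == 1, 0.9, 0.1)
    pop_a = (rng.random((n, 8)) < p_on).astype(float)
    pop_b = (rng.random((n, 8)) < p_on).astype(float)
    return np.hstack([pop_a, pop_b])

n_vis, n_hid = 16, 4                     # visible = both populations; hidden = "multisensory" layer
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)
lr = 0.1

for step in range(500):
    v0 = make_batch()
    # Positive phase: hidden activations driven by the data.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase (CD-1): one reconstruction step.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_hid)
    # Contrastive-divergence updates (data statistics minus model statistics).
    W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / len(v0)
    b_vis += lr * (v0 - v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)

# After training, reconstruction of fresh data through the hidden layer
# should beat the untrained baseline of ~0.5 mean absolute error.
v = make_batch()
h = (rng.random((len(v), n_hid)) < sigmoid(v @ W + b_hid)).astype(float)
recon = sigmoid(h @ W.T + b_vis)
err = np.abs(v - recon).mean()
```

The hidden layer here plays the role of the multisensory population: its connections (the fixed parameters) are learned purely from the joint statistics of the two input populations.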
19 May 03:35

Sparse Coding Can Predict Primary Visual Cortex Receptive Field Changes Induced by Abnormal Visual Input

by Jonathan J. Hunt et al.


by Jonathan J. Hunt, Peter Dayan, Geoffrey J. Goodhill

Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields.
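A minimal version of the sparse-coding principle the abstract invokes — inferring sparse codes under an l1 penalty and adapting a dictionary to the input statistics — might look like the following. The toy signals and hyperparameters are invented for illustration; this is not the models used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 10-D signals that are sparse combinations of a hidden
# ground-truth dictionary (our construction, standing in for image patches).
true_D = rng.standard_normal((10, 20))
true_D /= np.linalg.norm(true_D, axis=0)

def make_batch(n=100):
    codes = rng.standard_normal((20, n)) * (rng.random((20, n)) < 0.1)
    return true_D @ codes

def ista(D, X, lam=0.1, n_iter=50):
    """Sparse inference: minimize ||X - D a||^2 / 2 + lam * ||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        G = A - (D.T @ (D @ A - X)) / L        # gradient step on the quadratic term
        A = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)  # soft threshold
    return A

# Alternate sparse inference with a gradient step on the dictionary,
# renormalizing columns (the learned "receptive fields").
D = rng.standard_normal((10, 20))
D /= np.linalg.norm(D, axis=0)
for step in range(200):
    X = make_batch()
    A = ista(D, X)
    D += 0.05 * (X - D @ A) @ A.T / X.shape[1]
    D /= np.linalg.norm(D, axis=0) + 1e-12

X = make_batch()
A = ista(D, X)
err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
sparsity = (np.abs(A) > 1e-8).mean()
```

Rearing manipulations in this framework amount to changing the statistics inside `make_batch` and observing how the learned dictionary changes.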
19 May 03:30

Top-down influences on visual processing

by Charles D. Gilbert

Nature Reviews Neuroscience 14, 350 (2013). doi:10.1038/nrn3476

Authors: Charles D. Gilbert & Wu Li

Re-entrant or feedback pathways between cortical areas carry rich and varied information about behavioural context, including attention, expectation, perceptual tasks, working memory and motor commands. Neurons receiving such inputs effectively function as adaptive processors that are able to assume different functional states according to the …

19 May 03:28

An ear for statistics

by Israel Nelken

summary statistics

Nature Neuroscience 16, 381 (2013). doi:10.1038/nn.3360

Authors: Israel Nelken & Alain de Cheveigné

A study finds that sound textures are stored in auditory memory as summary statistics representing the sound over long time scales; specific events are superimposed, forming a 'skeleton of events on a bed of texture'.

19 May 03:27

The Neural Basis of Inhibitory Effects of Semantic and Phonological Neighbors in Spoken Word Production

by Daniel Mirman et al.
Journal of Cognitive Neuroscience, Volume 25, Issue 9, Pages 1504-1516, September 2013.
19 May 03:26

Seeing Objects through the Language Glass

by Bastien Boutonnet et al.
Journal of Cognitive Neuroscience, Volume 25, Issue 10, Pages 1702-1710, October 2013.
19 May 03:26

Different Synchronization Rules in Primary and Nonprimary Auditory Cortex of Monkeys

by Michael Brosch et al.
Journal of Cognitive Neuroscience, Volume 25, Issue 9, Pages 1517-1526, September 2013.
19 May 03:26

Neurobiological Systems for Lexical Representation and Analysis in English

by Mirjana Bozic et al.
Journal of Cognitive Neuroscience, Volume 25, Issue 10, Pages 1678-1691, October 2013.
19 May 03:24

Race perception isn’t automatic

by tomstafford


Last week’s column for BBC Future describes a neat social psychology experiment from an unlikely source. Three evolutionary psychologists reasoned that claims that we automatically categorise people by ethnicity must be wrong. Here’s how they set out to prove it. The original column is here.

For years, psychologists thought we instantly label each other by ethnicity. But one intriguing study proposes this is far from inevitable, with obvious implications for tackling racism.

When we meet someone we tend to label them in certain ways. “Tall guy” you might think, or “Ugly kid”. Lots of work in social psychology suggests that there are some categorisations that spring faster to mind. So fast, in fact, that they can be automatic. Sex is an example: we tend to notice if someone is a man or a woman, and remember that fact, without any deliberate effort. Age is another example. You can see this in the way people talk about others. If you said you went to a party and met someone, most people wouldn’t let you continue with your story until you said if it was a man or a woman, and there’s a good chance they’d want to know how old they were too.

Unfortunately, a swathe of evidence from the 1980s and 1990s also seemed to suggest that race is an automatic categorisation, in that people effortlessly and rapidly identified and remembered which ethnic group an individual appeared to belong to. “Unfortunate”, because if perceiving race is automatic then it lays a foundation for racism, and appears to put a limit on efforts to educate people to be “colourblind”, or put aside prejudices in other ways.

Over a decade of research failed to uncover experimental conditions that could prevent people instinctively categorising by race, until a trio of evolutionary psychologists came along with a very different take on the subject. Now, it seems only fair to say that evolutionary psychologists have a mixed reputation among psychologists. As a flavour of psychology it has been associated with political opinions that tend towards the conservative. Often, scientific racists claim to base their views on some jumbled version of evolutionary psychology (scientific racism is racism dressed up as science, not racism based on science, in case you wondered). So it was a delightful surprise when researchers from one of the world centres for evolutionary psychology intervened in the debate on social categorisation, by conducting an experiment they claimed showed that labelling people by race was far less automatic and inevitable than all previous research seemed to show.

Powerful force

The research used something called a “memory confusion protocol”. This works by asking experiment participants to remember a series of pictures of individuals, who vary along various dimensions – for example, some have black hair and some blond, some are men, some women, etc. When participants’ memories are tested, the errors they make reveal something about how they judged the pictures of individuals – what sticks in their mind most and least. If a participant more often confuses a black-haired man with a blond-haired man, it suggests that the category of hair colour is less important than the category of gender (and similarly, if people rarely confuse a man for a woman, that also shows that gender is the stronger category).
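The logic of the memory confusion protocol can be made concrete with a toy confusion matrix: misattributions that stay within a category, relative to those that cross it, index how strongly that category was used to encode the individuals. The counts and grouping below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical confusion counts for 8 pictured individuals in 2 categories
# (e.g., hair colour, gender, race, or team): errors[i, j] = how often a
# statement by individual i was misattributed to individual j. The 5-vs-1
# pattern is made up to illustrate a strongly encoded category.
category = np.array([0, 0, 0, 0, 1, 1, 1, 1])
errors = np.where(category[:, None] == category[None, :], 5, 1)
np.fill_diagonal(errors, 0)

def category_strength(errors, category):
    """Fraction of all misattributions that stay within a category.

    Near 1.0: confusions almost never cross the category boundary, so the
    category dominated how participants encoded the individuals. Near the
    chance level (here 24/56, the share of off-diagonal cells that are
    within-category), the category was not used at all."""
    off = ~np.eye(len(category), dtype=bool)          # ignore correct attributions
    same = (category[:, None] == category[None, :]) & off
    within = errors[same].sum()
    between = errors[off & ~same].sum()
    return within / (within + between)

strength = category_strength(errors, category)
```

In the study's terms, the key result is that adding coloured shirts pushes the race-based version of this statistic down toward chance while the team-based version rises.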

Using this protocol, the researchers tested the strength of categorisation by race, something all previous efforts had shown was automatic. The twist they added was to throw in another powerful psychological force – group membership. People had to remember individuals who wore either yellow or grey basketball shirts, and whose pictures were presented alongside statements indicating which team they were in. Without the shirts, the pattern of errors was clear: participants automatically categorised the individuals by their race (in this case: African American or Euro American). But with the coloured shirts, this automatic categorisation didn’t happen: people’s errors revealed that team membership had become the dominant category, not the race of the players.

It’s important to understand that the memory test was both a surprise – participants didn’t know it was coming up – and an unobtrusive measure of racial categorising. Participants couldn’t guess that the researchers were going to make inferences about how they categorised people in the pictures – so if they didn’t want to appear to perceive people on the basis of race, it wouldn’t be clear how they should change their behaviour to do this. Because of this we can assume we have a fairly direct measure of their real categorisation, unbiased by any desire to monitor how they appear.

So despite what dozens of experiments had appeared to show, this experiment created a situation where categorisation by race faded into the background. The explanation, according to the researchers, is that race is only important when it might indicate coalitional information – that is, whose team you are on. In situations where race isn’t correlated with coalition, it ceases to be important. This, they claim, makes sense from an evolutionary perspective. For most of our ancestors, age and gender would be important predictors of another person’s behaviour, but race wouldn’t – since most people lived in areas with no differences as large as the ones we associate with “race” today (a concept, incidentally, which has little currency among human biologists).

Since the experiment was published, the response from social psychologists has been muted. But supporting evidence is beginning to be reported, suggesting that the finding will hold. It’s an unfortunate fact of human psychology that we are quick to lump people into groups, even on the slimmest evidence. And once we’ve identified a group, it also seems automatic to jump to conclusions about what they are like. But this experiment suggests that although perceiving groups on the basis of race might be easy, it is far from inevitable.

19 May 03:21

Hearing Silences: Human Auditory Processing Relies on Preactivation of Sound-Specific Brain Activity Patterns

by SanMiguel, I., Widmann, A., Bendixen, A., Trujillo-Barreto, N., Schröger, E.

The remarkable capabilities displayed by humans in making sense of an overwhelming amount of sensory information cannot be explained easily if perception is viewed as a passive process. Current theoretical and computational models assume that to achieve meaningful and coherent perception, the human brain must anticipate upcoming stimulation. But how are upcoming stimuli predicted in the brain? We unmasked the neural representation of a prediction by omitting the predicted sensory input. Electrophysiological brain signals showed that when a clear prediction can be formulated, the brain activates a template of its response to the predicted stimulus before it arrives at our senses.

19 May 03:20

Learning Theory Approach to Minimum Error Entropy Criterion; Ting Hu, Jun Fan, Qiang Wu, Ding-Xuan Zhou; 14(Feb):377--397, 2013.

We consider the minimum error entropy (MEE) criterion and an empirical risk minimization learning algorithm when an approximation of Rényi's entropy (of order 2) by Parzen windowing is minimized. This learning algorithm involves a Parzen windowing scaling parameter. We present a learning theory approach for this MEE algorithm in a regression setting when the scaling parameter is large. Consistency and explicit convergence rates are provided in terms of the approximation ability and capacity of the involved hypothesis space. Novel analysis is carried out for the generalization error associated with Rényi's entropy and a Parzen windowing function, to overcome technical difficulties arising from the essential differences between the classical least squares problems and the MEE setting. An involved symmetrized least squares error is introduced and analyzed, which is related to some ranking algorithms.
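The core quantity in the MEE criterion — Rényi's order-2 entropy of the prediction errors, estimated by Parzen windowing — is easy to write down. A sketch with toy residuals of our own; `h` stands in for the Parzen windowing scaling parameter discussed in the abstract:

```python
import numpy as np

def information_potential(errors, h=0.5):
    """Parzen-window estimate of the quadratic information potential
    V = (1/n^2) * sum_ij G_h(e_i - e_j), with G_h a Gaussian kernel of
    width h (the Parzen windowing scaling parameter). Renyi's order-2
    entropy is -log V, so MEE learning maximizes V over the residuals."""
    diffs = errors[:, None] - errors[None, :]
    g = np.exp(-diffs ** 2 / (2.0 * h ** 2)) / (h * np.sqrt(2.0 * np.pi))
    return g.mean()

def renyi2_entropy(errors, h=0.5):
    return -np.log(information_potential(errors, h))

# Tightly concentrated residuals have lower error entropy than diffuse
# ones, which is exactly what the MEE criterion rewards (toy data, ours).
rng = np.random.default_rng(0)
tight = 0.1 * rng.standard_normal(200)
wide = 2.0 * rng.standard_normal(200)
```

The pairwise-difference structure inside `information_potential` is also where the paper's connection to a symmetrized least squares error and ranking algorithms comes from.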
19 May 03:20

Bayesian Nonparametric Hidden Semi-Markov Models; Matthew J. Johnson, Alan S. Willsky; 14(Feb):673--701, 2013.

There is much interest in the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) as a natural Bayesian nonparametric extension of the ubiquitous Hidden Markov Model for learning from sequential and time-series data. However, in many settings the HDP-HMM's strict Markovian constraints are undesirable, particularly if we wish to learn or encode non-geometric state durations. We can extend the HDP-HMM to capture such structure by drawing upon explicit-duration semi-Markov modeling, which has been developed mainly in the parametric non-Bayesian setting, to allow construction of highly interpretable models that admit natural prior information on state durations. In this paper we introduce the explicit-duration Hierarchical Dirichlet Process Hidden semi-Markov Model (HDP-HSMM) and develop sampling algorithms for efficient posterior inference. The methods we introduce also provide new methods for sampling inference in the finite Bayesian HSMM. Our modular Gibbs sampling methods can be embedded in samplers for larger hierarchical Bayesian models, adding semi-Markov chain modeling as another tool in the Bayesian inference toolbox. We demonstrate the utility of the HDP-HSMM and our inference methods on both synthetic and real experiments.
19 May 03:19

Semi-Supervised Learning Using Greedy Max-Cut; Jun Wang, Tony Jebara, Shih-Fu Chang; 14(Mar):771--800, 2013.

Graph-based semi-supervised learning (SSL) methods play an increasingly important role in practical machine learning systems, particularly in agnostic settings when no parametric information or other prior knowledge is available about the data distribution. Given the constructed graph represented by a weight matrix, transductive inference is used to propagate known labels to predict the values of all unlabeled vertices. Designing a robust label diffusion algorithm for such graphs is a widely studied problem and various methods have recently been suggested. Many of these can be formalized as regularized function estimation through the minimization of a quadratic cost. However, most existing label diffusion methods minimize a univariate cost with the classification function as the only variable of interest. Since the observed labels seed the diffusion process, such univariate frameworks are extremely sensitive to the initial label choice and any label noise. To alleviate the dependency on the initial observed labels, this article proposes a bivariate formulation for graph-based SSL, where both the binary label information and a continuous classification function are arguments of the optimization. This bivariate formulation is shown to be equivalent to a linearly constrained Max-Cut problem. Finally an efficient solution via greedy gradient Max-Cut (GGMC) is derived which gradually assigns unlabeled vertices to each class with minimum connectivity. Once convergence guarantees are established, this greedy Max-Cut based SSL is applied on both artificial and standard benchmark data sets where it obtains superior classification accuracy compared to existing state-of-the-art SSL methods. Moreover, GGMC shows robustness with respect to the graph construction method and maintains high accuracy over extensive experiments with various edge linking and weighting schemes.
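The greedy flavour of the approach can be illustrated on a toy graph (ours, not the paper's benchmarks): repeatedly give the unlabeled vertex with the heaviest connection to an existing class that class's label, growing within-class connectivity and so avoiding cuts through heavy edges. This is a simplified sketch of the greedy idea, not the paper's full GGMC algorithm:

```python
import numpy as np

# Tiny two-cluster weighted graph: nodes 0-2 and 3-5 are densely
# connected within clusters and only weakly across (edge 0-3).
W = np.array([
    [0.0, 1.0, 1.0, 0.1, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 1.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0, 1.0, 0.0],
])
seeds = {0: 0, 3: 1}                      # one observed label per class

def greedy_assign(W, seeds):
    """Greedily label the unlabeled vertex with maximum connectivity
    to any currently labeled class, until all vertices are labeled."""
    labels = dict(seeds)
    unlabeled = set(range(len(W))) - set(labels)
    classes = sorted(set(labels.values()))
    while unlabeled:
        best = None
        for v in unlabeled:
            for c in classes:
                score = sum(W[v, u] for u, lu in labels.items() if lu == c)
                if best is None or score > best[0]:
                    best = (score, v, c)
        _, v, c = best
        labels[v] = c
        unlabeled.remove(v)
    return labels

out = greedy_assign(W, seeds)
```

Because assignments are made in order of connection strength, early confident assignments reinforce later ones, which is the intuition behind the paper's robustness to label noise.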
19 May 03:19

A Widely Applicable Bayesian Information Criterion; Sumio Watanabe; 14(Mar):867--897, 2013.

A statistical model or a learning machine is called regular if the map taking a parameter to a probability distribution is one-to-one and if its Fisher information matrix is always positive definite. If otherwise, it is called singular. In regular statistical models, the Bayes free energy, which is defined by the minus logarithm of Bayes marginal likelihood, can be asymptotically approximated by the Schwarz Bayes information criterion (BIC), whereas in singular models such approximation does not hold. Recently, it was proved that the Bayes free energy of a singular model is asymptotically given by a generalized formula using a birational invariant, the real log canonical threshold (RLCT), instead of half the number of parameters in BIC. Theoretical values of RLCTs in several statistical models are now being discovered based on algebraic geometrical methodology. However, it has been difficult to estimate the Bayes free energy using only training samples, because an RLCT depends on an unknown true distribution. In the present paper, we define a widely applicable Bayesian information criterion (WBIC) by the average log likelihood function over the posterior distribution with the inverse temperature 1/log n, where n is the number of training samples. We mathematically prove that WBIC has the same asymptotic expansion as the Bayes free energy, even if a statistical model is singular for or unrealizable by the true distribution. Since WBIC can be numerically calculated without any information about a true distribution, it is a generalized version of BIC onto singular statistical models.
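The definition is concrete enough to compute directly: WBIC is minus the average log likelihood over the posterior tempered to inverse temperature 1/log n. A sketch on a toy regular model of our own choosing, where WBIC should recover the ordinary BIC correction (d/2) log n; its real payoff, per the abstract, is singular models where BIC fails:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100) + 1.0        # toy data from N(1, 1)
n = len(x)
beta = 1.0 / np.log(n)                    # the WBIC inverse temperature

# Toy regular model: x ~ N(theta, 1), uniform prior on a theta grid
# (grid quadrature stands in for MCMC over the tempered posterior).
theta = np.linspace(-5.0, 5.0, 2001)
loglik = (-0.5 * ((x[None, :] - theta[:, None]) ** 2).sum(axis=1)
          - 0.5 * n * np.log(2.0 * np.pi))

# Posterior tempered to inverse temperature beta (prior is uniform).
w = np.exp(beta * (loglik - loglik.max()))
w /= w.sum()

# WBIC: minus the tempered-posterior average of the log likelihood.
wbic = -(w * loglik).sum()

# For a regular d-parameter model this should match the BIC-style
# free-energy approximation -max loglik + (d/2) log n, with d = 1.
bic_like = -loglik.max() + 0.5 * np.log(n)
```

Note that nothing in the computation of `wbic` references the true distribution, which is the practical point of the criterion.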
19 May 03:19

GPstuff: Bayesian Modeling with Gaussian Processes; Jarno Vanhatalo, Jaakko Riihimäki, Jouni Hartikainen, Pasi Jylänki, Ville Tolvanen, Aki Vehtari; 14(Apr):1175--1179, 2013.

The GPstuff toolbox is a versatile collection of Gaussian process models and computational tools required for Bayesian inference. The tools include, among others, various inference methods, sparse approximations and model assessment methods.
19 May 03:17

Phase-Locked Responses to Speech in Human Auditory Cortex are Enhanced During Comprehension

by Peelle, J. E., Gross, J., Davis, M. H.


A growing body of evidence shows that ongoing oscillations in auditory cortex modulate their phase to match the rhythm of temporally regular acoustic stimuli, increasing sensitivity to relevant environmental cues and improving detection accuracy. In the current study, we test the hypothesis that nonsensory information provided by linguistic content enhances phase-locked responses to intelligible speech in the human brain. Sixteen adults listened to meaningful sentences while we recorded neural activity using magnetoencephalography. Stimuli were processed using a noise-vocoding technique to vary intelligibility while keeping the temporal acoustic envelope consistent. We show that the acoustic envelopes of sentences contain most power between 4 and 7 Hz and that it is in this frequency band that phase locking between neural activity and envelopes is strongest. Bilateral oscillatory neural activity phase-locked to unintelligible speech, but this cerebro-acoustic phase locking was enhanced when speech was intelligible. This enhanced phase locking was left lateralized and localized to left temporal cortex. Together, our results demonstrate that entrainment to connected speech does not only depend on acoustic characteristics, but is also affected by listeners’ ability to extract linguistic information. This suggests a biological framework for speech comprehension in which acoustic and linguistic cues reciprocally aid in stimulus prediction.
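The cerebro-acoustic phase locking at issue can be quantified with a phase-locking value between an acoustic envelope and a neural signal. A purely synthetic sketch (all signals invented; the study's source localization and explicit band-limiting to 4-7 Hz are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 200, 10.0                       # sampling rate (Hz), duration (s)
t = np.arange(0.0, dur, 1.0 / fs)

def analytic_signal(x):
    """FFT-based analytic signal (same construction as scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def plv(a, b):
    """Phase-locking value: consistency of the phase difference, 0 to 1."""
    da = np.angle(analytic_signal(a))
    db = np.angle(analytic_signal(b))
    return np.abs(np.mean(np.exp(1j * (da - db))))

# Toy 5 Hz "speech envelope" (inside the 4-7 Hz band the paper highlights),
# a noisy phase-lagged "neural" response, and an unrelated control signal.
envelope = np.sin(2.0 * np.pi * 5.0 * t)
entrained = np.sin(2.0 * np.pi * 5.0 * t - 0.5) + 0.3 * rng.standard_normal(len(t))
unrelated = rng.standard_normal(len(t))

locked = plv(envelope, entrained)
unlocked = plv(envelope, unrelated)
```

The paper's comparison is between this statistic for unintelligible versus intelligible speech; the sketch only shows the two extremes of locked versus unlocked signals.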

19 May 03:16

Two Distinct Ipsilateral Cortical Representations for Individuated Finger Movements

by Diedrichsen, J., Wiestler, T., Krakauer, J. W.

Movements of the upper limb are controlled mostly through the contralateral hemisphere. Although overall activity changes in the ipsilateral motor cortex have been reported, their functional significance remains unclear. Using human functional imaging, we analyzed neural finger representations by studying differences in fine-grained activation patterns for single isometric finger presses. We demonstrate that cortical motor areas encode ipsilateral movements in 2 fundamentally different ways. During unimanual ipsilateral finger presses, primary sensory and motor cortices show, underneath global suppression, finger-specific activity patterns that are nearly identical to those elicited by contralateral mirror-symmetric action. This component vanishes when both motor cortices are functionally engaged during bimanual actions. We suggest that the ipsilateral representation present during unimanual presses arises because otherwise functionally idle circuits are driven by input from the opposite hemisphere. A second type of representation becomes evident in caudal premotor and anterior parietal cortices during bimanual actions. In these regions, ipsilateral actions are represented as nonlinear modulation of activity patterns related to contralateral actions, an encoding scheme that may provide the neural substrate for coordinating bimanual movements. We conclude that ipsilateral cortical representations change their informational content and functional role, depending on the behavioral context.

19 May 03:16

The Early Spatio-Temporal Correlates and Task Independence of Cerebral Voice Processing Studied with MEG

by Capilla, A., Belin, P., Gross, J.

Functional magnetic resonance imaging studies have repeatedly provided evidence for temporal voice areas (TVAs) with particular sensitivity to human voices along bilateral mid/anterior superior temporal sulci and superior temporal gyri (STS/STG). In contrast, electrophysiological studies of the spatio-temporal correlates of cerebral voice processing have yielded contradictory results, finding the earliest correlates either at ~300–400 ms, or earlier at ~200 ms ("fronto-temporal positivity to voice", FTPV). These contradictory results are likely the consequence of different stimulus sets and attentional demands. Here, we recorded magnetoencephalography activity while participants listened to diverse types of vocal and non-vocal sounds and performed different tasks varying in attentional demands. Our results confirm the existence of an early voice-preferential magnetic response (FTPVm, the magnetic counterpart of the FTPV) peaking at about 220 ms and distinguishing between vocal and non-vocal sounds as early as 150 ms after stimulus onset. The sources underlying the FTPVm were localized along bilateral mid-STS/STG, largely overlapping with the TVAs. The FTPVm was consistently observed across different stimulus subcategories, including speech and non-speech vocal sounds, and across different tasks. These results demonstrate the early, largely automatic recruitment of focal, voice-selective cerebral mechanisms with a time-course comparable to that of face processing.

19 May 02:52

Surprisal, the PDC, and the primary locus of processing difficulty in relative clauses

Roger Levy and Edward Gibson
19 May 02:52

Early ERPs to faces: aging, luminance, and individual differences

Magdalena M. Bieniek, Luisa S. Frei and Guillaume A. Rousselet
19 May 02:52

Training to Improve Language Outcomes in Cochlear Implant Recipients

Erin M. Ingvalson and Patrick C. M. Wong
19 May 02:51

What is Bottom-Up and What is Top-Down in Predictive Coding?

Karsten Rauss and Gilles Pourtois
19 May 02:51

Syntactic Computations in the Language Network: Characterizing Dynamic Network Properties Using Representational Similarity Analysis

Lorraine K. Tyler, Teresa P. L. Cheung, Barry J. Devereux and Alex Clarke