Idsardi
Shared posts
A unified coding strategy for processing faces and voices
Idsardi: faces, voices
The Electrophysiological Underpinnings of Processing Gender Stereotypes in Language
Idsardi: gender N400
by Anna Siyanova-Chanturia, Francesca Pesciarelli, Cristina Cacciari
Despite the widely documented influence of gender stereotypes on social behaviour, little is known about the electrophysiological substrates engaged in the processing of such information when conveyed by language. Using event-related brain potentials (ERPs), we examined the brain response to third-person pronouns (lei “she” and lui “he”) that were implicitly primed by definitional (passeggera (fem.) “passenger”, pensionato (masc.) “pensioner”) or stereotypical antecedents (insegnante “teacher”, conducente “driver”). An N400-like effect on the pronoun emerged when it was preceded by a definitionally incongruent prime (passeggera (fem.) – lui; pensionato (masc.) – lei), and by a stereotypically incongruent prime for masculine pronouns only (insegnante – lui). In addition, a P300-like effect was found when the pronoun was preceded by definitionally incongruent primes. However, this effect was observed for female, but not male, participants. Overall, these results provide further evidence for on-line effects of stereotypical gender in language comprehension. Importantly, our results also suggest a gender stereotype asymmetry, in that male and female stereotypes affected the processing of pronouns differently.
Learning Multisensory Integration and Coordinate Transformation via Density Estimation
Idsardi: multisensory
by Joseph G. Makin, Matthew R. Fellows, Philip N. Sabes
Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations.
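The contrastive-divergence rule mentioned above is concrete enough to sketch. Below is a minimal, illustrative restricted Boltzmann machine trained with one-step contrastive divergence (CD-1); the layer sizes, learning rate, and toy binary “population activity” data are assumptions for demonstration, not the paper’s model or code.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.05):
        # Visible units stand in for unisensory populations, hidden units for
        # the multisensory population, and weights for synaptic connections.
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: clamp the data, sample the hidden units.
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one reconstruction step.
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        # CD-1 update: data correlations minus reconstruction correlations.
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

# Toy training loop on random binary activity vectors.
rbm = RBM(n_visible=40, n_hidden=20)
data = (rng.random((500, 40)) < 0.3).astype(float)
for epoch in range(20):
    for i in range(0, len(data), 50):
        rbm.cd1_step(data[i:i + 50])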
Sparse Coding Can Predict Primary Visual Cortex Receptive Field Changes Induced by Abnormal Visual Input
Idsardi: sparse
by Jonathan J. Hunt, Peter Dayan, Geoffrey J. Goodhill
Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were closely and consistently matched by all three models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally, we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields.
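To make the sparsity idea concrete, here is a minimal sparse-coding sketch in the spirit of Olshausen and Field rather than the paper’s specific models: sparse codes are inferred with ISTA, then the dictionary is nudged toward better reconstruction. Stacking left- and right-eye patches into one input vector, and the toy “rearing” statistics, are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
n_input, n_basis, lam = 128, 64, 0.1  # two 64-pixel patches (left + right eye)

D = rng.standard_normal((n_input, n_basis))
D /= np.linalg.norm(D, axis=0)        # unit-norm basis functions

def infer_codes(X, D, n_iter=100):
    # ISTA: minimize ||X - a D^T||^2 + lam * |a|_1 over the codes a.
    a = np.zeros((X.shape[0], D.shape[1]))
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    for _ in range(n_iter):
        grad = (a @ D.T - X) @ D
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold
    return a

# Toy "normal rearing": correlated left/right patches; decorrelating the two
# halves would crudely simulate conditions such as strabismic rearing.
left = rng.standard_normal((256, 64))
right = left + 0.1 * rng.standard_normal((256, 64))
X = np.hstack([left, right])

for _ in range(50):                    # alternate inference and learning
    a = infer_codes(X, D)
    D += 0.01 * (X - a @ D.T).T @ a / len(X)
    D /= np.linalg.norm(D, axis=0)     # renormalize basis functions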
Top-down influences on visual processing
Nature Reviews Neuroscience 14, 350 (2013). doi:10.1038/nrn3476
Authors: Charles D. Gilbert & Wu Li
Re-entrant or feedback pathways between cortical areas carry rich and varied information about behavioural context, including attention, expectation, perceptual tasks, working memory and motor commands. Neurons receiving such inputs effectively function as adaptive processors that are able to assume different functional states according to the …
An ear for statistics
Idsardi: summary statistics
Nature Neuroscience 16, 381 (2013). doi:10.1038/nn.3360
Authors: Israel Nelken & Alain de Cheveigné
A study finds that sound textures are stored in auditory memory as summary statistics representing the sound over long time scales; specific events are superimposed, forming a 'skeleton of events on a bed of texture'.
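As a rough illustration of what “summary statistics over long time scales” could mean in practice, the sketch below reduces a sound to long-term statistics of its band envelopes (means, variances, cross-band correlations), loosely in the spirit of McDermott and Simoncelli’s texture model; the filter bank and band edges are assumptions, not the study’s analysis.

import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def band_envelopes(x, fs, edges=(100, 300, 900, 2700, 7000)):
    # Split the sound into frequency bands and extract each band's envelope.
    envs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        envs.append(np.abs(hilbert(sosfiltfilt(sos, x))))
    return np.array(envs)

def texture_statistics(x, fs):
    env = band_envelopes(x, fs)
    return {
        "env_mean": env.mean(axis=1),   # long-term level per band
        "env_var": env.var(axis=1),     # strength of envelope fluctuations
        "band_corr": np.corrcoef(env),  # cross-band comodulation
    }

fs = 16000
texture = np.random.default_rng(2).standard_normal(fs * 2)  # 2 s of noise "texture"
stats = texture_statistics(texture, fs)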
The Neural Basis of Inhibitory Effects of Semantic and Phonological Neighbors in Spoken Word Production
Seeing Objects through the Language Glass
Different Synchronization Rules in Primary and Nonprimary Auditory Cortex of Monkeys
Neurobiological Systems for Lexical Representation and Analysis in English
Race perception isn’t automatic
Idsardi: Baugh
Last week’s column for BBC Future describes a neat social psychology experiment from an unlikely source. Three evolutionary psychologists reasoned that claims that we automatically categorise people by ethnicity must be wrong. Here’s how they set out to prove it. The original column is here.
For years, psychologists thought we instantly label each other by ethnicity. But one intriguing study proposes this is far from inevitable, with obvious implications for tackling racism.
When we meet someone we tend to label them in certain ways. “Tall guy” you might think, or “Ugly kid”. Lots of work in social psychology suggests that some categorisations spring faster to mind than others. So fast, in fact, that they can be automatic. Sex is an example: we tend to notice whether someone is a man or a woman, and remember that fact, without any deliberate effort. Age is another example. You can see this in the way people talk about others. If you said you went to a party and met someone, most people wouldn’t let you continue with your story until you said whether it was a man or a woman, and there’s a good chance they’d want to know how old they were too.
Unfortunately, a swathe of evidence from the 1980s and 1990s also seemed to suggest that race is an automatic categorisation, in that people effortlessly and rapidly identified and remembered which ethnic group an individual appeared to belong to. “Unfortunate”, because if perceiving race is automatic then it lays a foundation for racism, and appears to put a limit on efforts to educate people to be “colourblind”, or put aside prejudices in other ways.
Over a decade of research failed to uncover experimental conditions that could prevent people instinctively categorising by race, until a trio of evolutionary psychologists came along with a very different take on the subject. Now, it seems only fair to say that evolutionary psychologists have a mixed reputation among psychologists. As a flavour of psychology it has been associated with political opinions that tend towards the conservative. Often, scientific racists claim to base their views on some jumbled version of evolutionary psychology (scientific racism is racism dressed up as science, not racism based on science, in case you wondered). So it was a delightful surprise when researchers from one of the world centres for evolutionary psychology intervened in the debate on social categorisation, by conducting an experiment they claimed showed that labelling people by race was far less automatic and inevitable than all previous research seemed to show.
Powerful force
The research used something called a “memory confusion protocol”. This works by asking experiment participants to remember a series of pictures of individuals, who vary along several dimensions – for example, some have black hair and some blond, some are men, some women, and so on. When participants’ memories are tested, the errors they make reveal something about how they judged the pictures of individuals – what sticks in their mind most and least. If a participant confuses a black-haired man with a blond-haired man more often than they confuse a man with a woman, it suggests that the category of hair colour is less important than the category of gender (and, likewise, the rarity of confusing a man for a woman shows that gender is the stronger category).
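To see how such errors can be scored, here is a hypothetical sketch of the logic: count how often misattributions stay within a category versus cross it. The variable names and toy data are illustrative, not the study’s materials.

# Each error pairs the target individual with the individual the participant
# actually picked; categories here are illustrative labels per individual.
category = {"A": "team_yellow", "B": "team_yellow", "C": "team_grey", "D": "team_grey"}
errors = [("A", "B"), ("A", "B"), ("C", "D"), ("A", "C"), ("B", "D")]

within = sum(category[target] == category[picked] for target, picked in errors)
between = len(errors) - within

# More within- than between-category confusions means the category is doing
# work in memory: members of the same category are harder to tell apart.
print(f"within: {within}, between: {between}")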
Using this protocol, the researchers tested the strength of categorisation by race, something all previous efforts had shown was automatic. The twist they added was to throw in another powerful psychological force – group membership. People had to remember individuals who wore either yellow or grey basketball shirts, and whose pictures were presented alongside statements indicating which team they were in. Without the shirts, the pattern of errors was clear: participants automatically categorised the individuals by their race (in this case: African American or Euro American). But with the coloured shirts, this automatic categorisation didn’t happen: people’s errors revealed that team membership had become the dominant category, not the race of the players.
It’s important to understand that the memory test was both a surprise – participants didn’t know it was coming up – and an unobtrusive measure of racial categorising. Participants couldn’t guess that the researchers were going to make inferences about how they categorised people in the pictures – so if they didn’t want to appear to perceive people on the basis of race, it wouldn’t be clear how they should change their behaviour to do this. Because of this we can assume we have a fairly direct measure of their real categorisation, unbiased by any desire to monitor how they appear.
So despite what dozens of experiments had appeared to show, this experiment created a situation where categorisation by race faded into the background. The explanation, according to the researchers, is that race is only important when it might indicate coalitional information – that is, whose team you are on. In situations where race isn’t correlated with coalition, it ceases to be important. This, they claim, makes sense from an evolutionary perspective. For most of our ancestors, age and gender would have been important predictors of another person’s behaviour, but race wouldn’t – since most people lived in areas without differences as large as the ones we associate with “race” today (a concept, incidentally, which has little currency among human biologists).
Since the experiment was published, the response from social psychologists has been muted. But supporting evidence is beginning to be reported, suggesting that the finding will hold. It’s an unfortunate fact of human psychology that we are quick to lump people into groups, even on the slimmest evidence. And once we’ve identified a group, it also seems automatic to jump to conclusions about what its members are like. But this experiment suggests that although perceiving groups on the basis of race might be easy, it is far from inevitable.
Hearing Silences: Human Auditory Processing Relies on Preactivation of Sound-Specific Brain Activity Patterns
The remarkable capabilities displayed by humans in making sense of an overwhelming amount of sensory information cannot be explained easily if perception is viewed as a passive process. Current theoretical and computational models assume that to achieve meaningful and coherent perception, the human brain must anticipate upcoming stimulation. But how are upcoming stimuli predicted in the brain? We unmasked the neural representation of a prediction by omitting the predicted sensory input. Electrophysiological brain signals showed that when a clear prediction can be formulated, the brain activates a template of its response to the predicted stimulus before it arrives at our senses.
Learning Theory Approach to Minimum Error Entropy Criterion; Ting Hu, Jun Fan, Qiang Wu, Ding-Xuan Zhou; 14(Feb):377--397, 2013.
Bayesian Nonparametric Hidden Semi-Markov Models; Matthew J. Johnson, Alan S. Willsky; 14(Feb):673--701, 2013.
Semi-Supervised Learning Using Greedy Max-Cut; Jun Wang, Tony Jebara, Shih-Fu Chang; 14(Mar):771--800, 2013.
A Widely Applicable Bayesian Information Criterion; Sumio Watanabe; 14(Mar):867--897, 2013.
GPstuff: Bayesian Modeling with Gaussian Processes; Jarno Vanhatalo, Jaakko Riihimäki, Jouni Hartikainen, Pasi Jylänki, Ville Tolvanen, Aki Vehtari; 14(Apr):1175--1179, 2013.
Phase-Locked Responses to Speech in Human Auditory Cortex are Enhanced During Comprehension
Idsardi: phase
A growing body of evidence shows that ongoing oscillations in auditory cortex modulate their phase to match the rhythm of temporally regular acoustic stimuli, increasing sensitivity to relevant environmental cues and improving detection accuracy. In the current study, we test the hypothesis that nonsensory information provided by linguistic content enhances phase-locked responses to intelligible speech in the human brain. Sixteen adults listened to meaningful sentences while we recorded neural activity using magnetoencephalography. Stimuli were processed using a noise-vocoding technique to vary intelligibility while keeping the temporal acoustic envelope consistent. We show that the acoustic envelopes of sentences contain most power between 4 and 7 Hz and that it is in this frequency band that phase locking between neural activity and envelopes is strongest. Bilateral oscillatory neural activity phase-locked to unintelligible speech, but this cerebro-acoustic phase locking was enhanced when speech was intelligible. This enhanced phase locking was left-lateralized and localized to left temporal cortex. Together, our results demonstrate that entrainment to connected speech depends not only on acoustic characteristics but also on listeners’ ability to extract linguistic information. This suggests a biological framework for speech comprehension in which acoustic and linguistic cues reciprocally aid in stimulus prediction.
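As a rough illustration of this kind of analysis (an assumed sketch, not the authors’ pipeline): band-pass both the speech envelope and a neural channel at 4–7 Hz, extract instantaneous phase with the Hilbert transform, and summarize their alignment as a phase-locking value.

import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def phase_locking_value(envelope, neural, fs, band=(4.0, 7.0)):
    # Band-pass both signals, then compare their instantaneous phases.
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    phase_env = np.angle(hilbert(sosfiltfilt(sos, envelope)))
    phase_meg = np.angle(hilbert(sosfiltfilt(sos, neural)))
    # PLV: length of the mean resultant of the phase differences (0 to 1).
    return np.abs(np.mean(np.exp(1j * (phase_env - phase_meg))))

rng = np.random.default_rng(3)
fs = 200                                  # Hz, illustrative sampling rate
t = np.arange(fs * 10) / fs               # 10 s of toy data
envelope = np.sin(2 * np.pi * 5 * t)      # 5 Hz stand-in for a speech envelope
neural = np.sin(2 * np.pi * 5 * t + 0.4) + 0.5 * rng.standard_normal(len(t))
print(f"PLV: {phase_locking_value(envelope, neural, fs):.2f}")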
Two Distinct Ipsilateral Cortical Representations for Individuated Finger Movements
Movements of the upper limb are controlled mostly through the contralateral hemisphere. Although overall activity changes in the ipsilateral motor cortex have been reported, their functional significance remains unclear. Using human functional imaging, we analyzed neural finger representations by studying differences in fine-grained activation patterns for single isometric finger presses. We demonstrate that cortical motor areas encode ipsilateral movements in 2 fundamentally different ways. During unimanual ipsilateral finger presses, primary sensory and motor cortices show, underneath global suppression, finger-specific activity patterns that are nearly identical to those elicited by contralateral mirror-symmetric action. This component vanishes when both motor cortices are functionally engaged during bimanual actions. We suggest that the ipsilateral representation present during unimanual presses arises because otherwise functionally idle circuits are driven by input from the opposite hemisphere. A second type of representation becomes evident in caudal premotor and anterior parietal cortices during bimanual actions. In these regions, ipsilateral actions are represented as nonlinear modulation of activity patterns related to contralateral actions, an encoding scheme that may provide the neural substrate for coordinating bimanual movements. We conclude that ipsilateral cortical representations change their informational content and functional role, depending on the behavioral context.
The Early Spatio-Temporal Correlates and Task Independence of Cerebral Voice Processing Studied with MEG
Functional magnetic resonance imaging studies have repeatedly provided evidence for temporal voice areas (TVAs) with particular sensitivity to human voices along bilateral mid/anterior superior temporal sulci and superior temporal gyri (STS/STG). In contrast, electrophysiological studies of the spatio-temporal correlates of cerebral voice processing have yielded contradictory results, finding the earliest correlates either at ~300–400 ms, or earlier at ~200 ms ("fronto-temporal positivity to voice", FTPV). These contradictory results are likely the consequence of different stimulus sets and attentional demands. Here, we recorded magnetoencephalography activity while participants listened to diverse types of vocal and non-vocal sounds and performed different tasks varying in attentional demands. Our results confirm the existence of an early voice-preferential magnetic response (FTPVm, the magnetic counterpart of the FTPV) peaking at about 220 ms and distinguishing between vocal and non-vocal sounds as early as 150 ms after stimulus onset. The sources underlying the FTPVm were localized along bilateral mid-STS/STG, largely overlapping with the TVAs. The FTPVm was consistently observed across different stimulus subcategories, including speech and non-speech vocal sounds, and across different tasks. These results demonstrate the early, largely automatic recruitment of focal, voice-selective cerebral mechanisms with a time-course comparable to that of face processing.