Picking out your name at a noisy cocktail party requires a trained brain

We’ve all experienced that moment when we respond to the sound of our name in a crowded, noisy room. The “Cocktail Party Effect”, as this has become known, is indicative of the complex underlying data processing that goes on in the brain.

To unpack it a little, it is necessary to use a metaphor and some terminology. For the metaphor, picture the person at the cocktail party whose name will be mentioned as a buoy floating in a sea of data, equipped with a sensor that captures every nuance of the waves it floats upon. Everything that happens at that party is data: the music, the voices, the conversations, the décor, the smells of food and drink, the body language of the participants, the clothes, the perfumes, the facial expressions, the gestures, the language used, the tone of voice, the content and context of the conversations, the context of the occasion, the status and perceived status of each participant, the perceived importance of the event, the accessories each is wearing. The list goes on and on, as each element interacts with the rest and resolves into yet more data.

The brain receives all that data. Our senses of sight, hearing, touch, smell and taste send a plethora of captured data to the brain, which processes it. The metaphor, however, is accurate only if we assume that the person is a machine and the brain is a CPU, neither of which is true. As highly evolved organic beings we possess brains with quite a narrow usable processing capacity. Although the brain can perform more than 38 thousand trillion operations per second and hold about 3,584 terabytes of memory, processing all that data in a meaningful way would rapidly exhaust all the energy we can muster, leading either to paralysis of our ability to analyze what’s in front of us or, worse, to a sense of deep anxiety, born of that exhaustion, that renders us incapable of socializing.

The brain cleverly avoids all this by applying a series of filters, which is where the terminology comes in. First, there is the binaural effect: the brain uses both ears to model a sound field, which allows it to gauge the direction and distance of a sound source and thus reveal its location.
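This two-ear trick can be sketched in engineering terms: the delay between a sound’s arrival at each ear (the interaural time difference) reveals the direction it came from. The sketch below is a minimal, hypothetical model, not a description of actual neural processing; the function name, ear spacing and speed-of-sound values are illustrative assumptions.

```python
import numpy as np

def estimate_direction(left, right, sample_rate, ear_distance=0.21, speed_of_sound=343.0):
    """Estimate a sound source's azimuth (degrees, positive = to the right)
    from two-ear recordings via the interaural time difference (ITD)."""
    # The lag of the cross-correlation peak is the ITD in samples:
    # how much later the sound arrived at the left ear than at the right.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    itd = lag / sample_rate  # seconds
    # Simple geometric model: itd = (ear_distance / speed_of_sound) * sin(azimuth)
    sin_az = np.clip(itd * speed_of_sound / ear_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_az)))

# Simulate a broadband sound 40 degrees to the right: it reaches the
# right ear first and the left ear a few samples later.
sr = 44100
rng = np.random.default_rng(0)
signal = rng.standard_normal(2000)                               # ~45 ms noise burst
delay = int(round(0.21 / 343.0 * np.sin(np.radians(40)) * sr))   # ITD in samples
right = signal
left = np.roll(signal, delay)                                    # delayed copy
print(round(estimate_direction(left, right, sr), 1))             # close to 40.0
```

A broadband signal is used because a pure tone repeats, making its cross-correlation peak ambiguous; the brain faces the same ambiguity, which is one reason it combines timing with loudness and spectral cues.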

If that sounds incredibly taxing for the brain, consider that it takes place while, consciously, we are scanning our surroundings and perhaps already interacting with some of the people around us. To cope, the brain implements inattentional blindness, essentially a mechanism whereby particular stimuli are ignored, either because the brain has already made assumptions that render them obsolete (hence perceptual bias) or because it has simply grown too tired from all the aural processing it is carrying out to deal with them.

By that logic, we should be incapable of picking out our name through all that noisy data. Yet we do, which means that the brain not only monitors the background data at a subconscious level, looking for some very specific things, but is also capable of dialing some of the data input fields up or down so we can focus on what is important to us.

A new study suggests that this is exactly what is happening, but before we get to it, it’s important to cover one more of the brain’s mechanisms (and some more terminology). The figure-ground effect is a mental interpretational mechanism that takes in data and ascribes meaning to it in relation to its background, or the data that surrounds it. It is most often seen in the visual field, when we have to decide, for example, the classic question: is it two faces looking at each other, or is it a vase? The brain can apparently apply the same filtering to sounds, either treating a room as just a collection of cluttered noise or dulling the noise into the background and picking out what’s important to us. In this case, our name.

It sounds like a superpower, which in a way it is, and this is where it gets interesting. The brain, apparently, can be primed to listen for specific sounds in a noisy environment, successfully picking them out of the confusing backdrop. That means it can be trained. A trained brain has better representational models of the world, built on experience, and a better understanding of the world’s underlying mechanics, which allow it to function at a higher level of performance.

A trained brain deals with perception differently. It uses experience as an additional layer of filtering to escape the effect that makes us see only the world we expect to see (selective blindness, as in the 2010 viral video where practically everyone missed the gorilla) and actually see a broader, more relevant picture of the world we need to see.

This transforms us from casual bystanders at a cocktail party, overwhelmed by the noise it generates, our brains busy shutting down perceptual analysis in order to survive, into James Bond types who use their mental resources selectively, progressively seeing more of the world and doing more in it than other people.

Just how to train your brain to do this, without putting your name to the Official Secrets Act and signing up with MI6, is the subject of my latest book: The Sniper Mind: Eliminate Fear, Deal with Uncertainty, and Make Better Decisions. You are here. You know what to do.
