By Margaret Mcguinness

Ever hear of “neurocinematics” – a term coined by Uri Hasson of Princeton University?

If not, it’s essentially a set of methods borrowed from neuroscience, now employed by neuromarketers and aimed at filmmakers. Using tools such as biometric devices (to track eye movements and heart rate), EEG (to analyze brain waves), and fMRI (to record brain activity), neuromarketers can help filmmakers better understand their viewers’ reactions, whether to completed pieces, screenings, or trailers (the latest Harry Potter movie trailer was tested with neurocinematics).

“Under the assumption that mental states are tightly related to brain states” (a hypothesis that is widely accepted by most neuroscientists and many philosophers), Hasson and colleagues found that “some films can exert considerable control over brain activity and eye movements.”

Neuromarketers ensure the reliability of their findings using several techniques.

To establish a baseline for measuring viewers’ brain activity, and to avoid measuring noise unrelated to the task at hand, neuromarketers first record participants while they watch a non-stimulating target (e.g., a standard cross against a gray background), which should elicit no response. They then compare this response (or lack thereof) to the one elicited by a clip. Some participants may even be asked to watch a clip three or four times for comparison purposes.

Because the response of a single participant says little about a clip, neuromarketers use inter-subject correlation (ISC) analysis for further reliability. With it, they can “assess similarities in the spatiotemporal responses across viewers’ brains,” and these correlations can “extend far beyond the visual and auditory cortices” to other areas, such as the lateral sulcus (LS), the postcentral sulcus, and the cingulate gyrus.
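Conceptually, ISC reduces to correlating each viewer’s response time course with the average of everyone else’s. Here is a minimal sketch in Python with NumPy, using simulated data; the function name and toy signals are my own illustration, not Hasson’s actual analysis pipeline:

```python
import numpy as np

def inter_subject_correlation(responses):
    """Mean leave-one-out ISC for one brain region.

    responses: array of shape (n_viewers, n_timepoints), each row one
    viewer's response time course (e.g., an fMRI signal) to the same clip.
    """
    n = responses.shape[0]
    correlations = []
    for i in range(n):
        # Correlate viewer i against the average of all other viewers
        others = np.delete(responses, i, axis=0).mean(axis=0)
        r = np.corrcoef(responses[i], others)[0, 1]
        correlations.append(r)
    return float(np.mean(correlations))

# Toy example: three "viewers" whose signals share a common component,
# as a gripping film might produce, versus unrelated noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
viewers = np.array([shared + 0.5 * rng.standard_normal(200) for _ in range(3)])
scrambled = rng.standard_normal((3, 200))

print(inter_subject_correlation(viewers))    # high: responses move together
print(inter_subject_correlation(scrambled))  # near zero: no shared response
```

On this toy data, the shared-component “viewers” score a high ISC while the independent noise scores near zero, which is the contrast Hasson’s team reports between tightly controlled films and loosely structured ones.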

In 2008, Uri Hasson and colleagues measured how viewers’ brains responded to different types of films, ranging from real-life scenarios, documentaries, and art films to Hollywood blockbusters. While the Alfred Hitchcock Presents episode Bang! You’re Dead elicited the same response across all viewers in 65% of the cortex, Larry David’s Curb Your Enthusiasm elicited the same response in only 18%.

Hasson argues that the ISC level indicates how much control a filmmaker exerts, intentionally or not, over viewers’ experiences, which would leave Hitchcock with considerably more control than Larry David.

Hasson’s team also changed the order of scenes for different participants and assessed their reactions, finding that the more coherent the scene order, the higher the ISC in brain regions involved in extracting meaning. Reordering scenes this way can help filmmakers determine which sequence best promotes viewers’ understanding.

Phil Carlsen and Devon Hubbard of MindSign in San Diego, California, suggest that neurocinematics can help filmmakers decide which actor will elicit more brain activity from viewers and, consequently, give a film a better shot at the box office. The method can also help assign movie ratings, depending on how brain areas associated with disgust and approval respond.

Carlsen has also found, not surprisingly, that 3D scenes activate the brain more than 2D ones, particularly when viewers use modern polarized glasses rather than the older blue-and-red ones.

Neurocinematics could change the film industry immensely. Whether a filmmaker wants near-complete control or just enough to ensure the message comes across, this method can help make it happen. Even the U.S. Advertising Research Foundation is seriously considering it, working to define regulatory and quality standards, says Ron Wright of Sands Research.

While some may argue that neurocinematics kills creativity or invades personal privacy, others might find it revolutionary, giving filmmakers more opportunities to create their ideal pieces and viewers more engaging, worthwhile films.

Wright, along with neuromarketing consultant Roger Dooley, would likely argue that the method is far from an invasion of privacy. Wright believes there are too many variables in play to pin down the mental “buy button” that would hypothetically lead someone to spend money, in this case on a film. Dooley does not believe that neuromarketers will “ever find some sort of magic spot that will allow [them] to accurately predict whether someone will purchase a product or not.”

Neurocinematics, like it or not, is becoming an important element in the blending of the arts and sciences.

Blending with dozens of other fields and producing remarkable products and methods along the way, neuroscience, in my opinion, has set off down quite a revolutionary road.

Have you seen the hit summer movie Inception yet? If not, I recommend you do, because it’s mind-bottling (yes, Blades of Glory’s Chazz Michael Michaels would approve). Either way, seen it or not, the movie piqued my curiosity about the ever-growing interaction between technology and our brains, our minds.

In the movie, Leonardo DiCaprio’s character, Cobb, is on a mission to plant an idea into another character’s mind in order to safely, legally, go home to his kids.

Cobb and his colleagues use a PASIV (Portable Automated Somnacin IntraVenous) device to access the target’s mind while he is sleeping on an airplane.

Unfortunately for Cobb’s team, the target’s mind has what Cobb calls “subconscious security,” trained mental “projections” set up in his mind to protect it from intruders. To implant the idea, they have to find a way around this security, but how? Will they make it? The movie’s a must-see – watch and find out!

So, how is this far-off movie-world of inception, dream-sharing, and mind-reading relevant, or worthy of discussion at present?

Well, haven’t fMRI results been cluing us in to some of our emotions, conscious or not? In my last post, I discussed how a person’s level of empathy correlates with activity in the ACC (anterior cingulate cortex) as recorded by fMRI. Whether lab volunteers knew it or not, they (their brains, really) reacted more strongly to the pain of those similar to them.

And now, with the rapid succession of advancements in brain-imaging technology, mind-reading and even dream-recording do not seem so unrealistic. “Subconscious security” might actually come in handy if our privacy becomes too vulnerable.

Recent articles (one and two) discuss how researchers at Northwestern University have developed technology that can be used to “get inside the mind of a terrorist.” In their experiment, they created a mock terror-attack plan whose details were known and memorized only by some lab participants. The researchers found a correlation between the participants’ P300 brain waves and their guilty knowledge with “100 percent accuracy,” as J.P. Rosenfeld says.

Measuring the waves with electrodes attached to the participants’ scalps, the researchers could determine whether a participant had prior knowledge of, or strong familiarity with, certain dates, places, times, and so on.
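In rough terms, the guilty-knowledge logic compares the brain’s averaged response in the P300 window (roughly 300 to 600 ms after an item appears on screen) for the critical “probe” item against irrelevant items: a familiar detail evokes a larger positive deflection. Here is a toy sketch with simulated signals; the function, threshold, and data are illustrative assumptions, not Rosenfeld’s actual protocol:

```python
import numpy as np

def p300_amplitude(epochs, sfreq=250, window=(0.3, 0.6)):
    """Mean amplitude in the P300 window (300-600 ms after stimulus onset).

    epochs: array of shape (n_trials, n_samples), EEG segments
    time-locked to each presentation of an item.
    """
    start, stop = (int(t * sfreq) for t in window)
    return epochs[:, start:stop].mean()

# Simulated data: 40 one-second trials at 250 Hz per item type.
rng = np.random.default_rng(1)
n_trials, n_samples = 40, 250
irrelevant = rng.standard_normal((n_trials, n_samples))
probe = rng.standard_normal((n_trials, n_samples))
probe[:, 75:150] += 2.0  # simulated P300 bump, 300-600 ms post-stimulus

# Flag "guilty knowledge" when the probe's P300 clearly exceeds baseline
guilty_knowledge = p300_amplitude(probe) > p300_amplitude(irrelevant) + 1.0
print(guilty_knowledge)  # True for this simulated "guilty" subject
```

Real guilty-knowledge tests use statistical comparisons across many trials and electrodes rather than a fixed threshold, but the core contrast, probe versus irrelevant P300 amplitude, is the same.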

While TIME writer Eben Harrell notes the threat this type of experiment poses to privacy, as well as its clear limitations (“confounding factors” such as familiarity that stems not from guilty knowledge but from fond memories, say, of a hometown), he also notes that conventional interrogations can be more harmful than these tests. Accuracy can be improved simply by presenting more details to participants.

In her 2008 article, Celeste Biever describes how scientists, particularly Yukiyasu Kamitani and his colleagues, have managed to analyze brain scans and reconstruct images the lab volunteers had seen. The scientists believe their work has the potential to yield reconstructions with finer detail and even color.

John-Dylan Haynes of the Max Planck Institute in Germany, referred to in Biever’s article, says that the “next step is to find out if it is possible to image things that people are thinking of.” He even considers the possibility of making “a videotape of a dream” in the near future.

However, as Berns points out in an August 2010 article by Graham Lawton and Clare Wilson, ethical issues would likely arise if this brain-imaging technology becomes too intimately bound up with advertising and marketing. People would probably feel uncomfortable if advertisers became too knowledgeable about the workings of their minds, perhaps even enticing them to buy things they “don’t want, don’t need, and can’t afford.”

For now, though, the advertising applications of this technology seem innocent enough. With EEG machines, advertisers can determine which designs, words, or advertisements elicit certain patterns of brain activity, in other words, which receive the most attention from potential buyers.

EEG machines have been used not only by advertisers but also by sales companies, for purposes both inspiring and less so. Starting with the latter, some companies (e.g., Mattel, Emotiv) have sold cheap EEG devices to “mainstream consumers,” particularly gamers. One Californian company, NeuroSky, has built an emotion-detecting device that determines, say, whether one is relaxed or anxious.

While not completely necessary or inspiring, devices like these are fascinating. I’d like to see what some device thinks I’m feeling, and how accurate it is.

Fortunately, there are inspiring uses for these EEG devices. As Tom Simonite discusses in his April 2009 article, people with disabilities or paralysis can use them to help control wheelchairs and even type on a computer. One engineer, Adam Wilson, even updated his Twitter using one of these systems (BCI2000): “USING EEG TO SEND TWEET.”

EEG devices aren’t the only technology helping those with disabilities or paralysis; prosthetic limbs can, too. Weiss and Hofmann, professors at the Friedrich Schiller University of Jena, write in their article that one of their systems allows the brain to “pick up…feedback from the prosthesis as if it was one’s own hand,” easing phantom pain.

DARPA (the Defense Advanced Research Projects Agency) has pushed prostheses even further, aspiring to restore the experience of touch to wearers. It awarded The Johns Hopkins University Applied Physics Laboratory (APL) over $30 million to “manage and test the Modular Prosthetic Limb (MPL),” which would use a “brain-controlled interface” to perform desired actions.

Despite the hurdles along the way (as noted by Andrew Nusca in his blog), the lab has released a final design, as described in a reprinted online article. It offers an impressive 22 degrees of motion, weighs about nine pounds (the average weight of a natural limb), and responds to the wearer’s thoughts.

While the blossoming relationship between technology and the brain has raised ethical questions about privacy and potential misuse, it has also brought people hope and awe. Maybe, given the seeming innocence of neuromarketing thus far and the hope inspired by research and development, we won’t need to turn to “subconscious security” just yet.

What contributes more to creating a person’s identity (i.e. personality, behavior, intelligence)? Is it genetics, or is it the environment in which the person was raised? In other words, as Francis Galton might ask, is it “nature” or “nurture?”


When it comes to how empathetic someone is, Frans de Waal, a Dutch primatologist and ethologist, believes it’s both nature and nurture. He says that a person’s empathy is “innate,” inherited through genes, but also that a person can learn to become more or less empathetic. That seems reasonable; depending on early experiences and education, someone may develop a given trait to a greater or lesser degree.

But how is empathy innate? Two New Scientist writers, Philip Cohen and Ewen Callaway, wrote articles discussing brain areas called the anterior cingulate cortex (ACC) and the anterior insula (AI), which become active not only when we are in pain but also when others are.

Imaging studies, cited in their articles, found a positive correlation between a volunteer’s reported empathy for a person in pain and activity in the pain-processing areas of the volunteer’s brain. This has led Cohen to believe, “Humans are hardwired to feel empathy.”

For example, in a study led by Shihui Han and colleagues, “17 Chinese and 16 Caucasian (from the US, Europe and Israel) volunteers” watched videos of strangers, both Caucasian and Chinese, in pain while their brains were scanned with fMRI. While the fMRI results suggested that volunteers responded more empathetically toward people of their own ethnicity or country, their self-reported responses indicated they “[felt] each other’s pain about equally.”

Interestingly, our brains seem to be “hardwired” to feel more for certain groups over others, whether we notice or not. These groups appear to consist of people we can identify more with, whether through ethnicity, age, gender, or any other in-group.

Frans de Waal would find these results quite understandable. He says, “Empathy is more pronounced the more similar you are to someone, the more close, socially close, you are to someone.” He continues to say that empathy “evolved… for members of any species that is cooperative and social… it’s important to take care of others in the group because you depend on [them], you survive by [them].”

Seemingly then, our brains, and likely those of other species, have evolved to serve a survival advantage; they respond in those pain-processing areas more actively when those like us are in pain, despite what we report as our level of empathy.

While we seem to be hardwired to empathize more with certain groups over others, we’re still united as a species to empathize with one another over those of other species.

Martha Farah, a cognitive neuroscience researcher, suggests that we have a “person network” that divides the world into persons and non-persons and has promoted closer social bonds within our species. Farah supports the idea that this network exists by pointing to the rare disorder prosopagnosia, the “impaired visual recognition of the human face.” Damage to a specific brain area can “selectively” produce the disorder, demonstrating that specialized regions of the brain exist for recognizing other humans.

Whether our brains also specialize in empathy toward non-persons is something to look into. For now, consider yawn contagion, which de Waal discussed with TIME. He says there is a “deep bodily connection” that allows pets to catch yawns from their owners. This seemingly innate connection appears to break physical barriers between species, but what connection, if any, breaks emotional ones? And is it innate, or is it learned?

Have animal rights activists and pet lovers learned to be more empathetic towards non-persons? I’d like to think that it’s not just the influence of my environment that has led me to empathize with my childhood pets or toys – not to mention some of my favorite characters, like Hamm from Toy Story or Patrick from SpongeBob SquarePants.

Whether it is learned, innate, or both, I cannot say, but anthropomorphism seems to explain our emotional connections with non-humans. It breaks the barrier, allowing us to personify or add human characteristics to non-humans. For example, most people would probably like to think of their childhood pets as loved ones with human-like feelings and desires. However, would some stranger halfway across the world feel the same way you do about your pet? Probably not. They’d likely think of it as just another animal, simple as that.

Most people, if asked if they support animal rights, would probably answer ‘Yes’ or some derivative of that. But, would they promise to never buy any animal-based products (eggs, meat, suede, leather, or even the chinchilla coat seen on Teresa last week in The Real Housewives of New Jersey)? Most likely not. I mean, for anyone, that’s a hard promise to keep when we have other priorities.

So how do we go from talking to our pets as if they were humans to absentmindedly buying products that might contain ingredients of an animal just like our pets?

de Waal says we do this through dehumanization. Just as easily as we anthropomorphize our favorite pets, toys, and characters, we can strip other animals of human characteristics. By removing traits like emotion or spoken language, we don’t have to feel as bad about buying that leather jacket we always wanted. de Waal reminds us, “We eat nonhuman animals, wear them, perform painful experiments on them, hold them captive for purposes of our own – sometimes in unhealthy condition. We make them work, and we kill them at will.”

So, the next time you shop and find that animal-based product you just NEED to buy, take a second to think about how you’re setting your priorities. Think about how, maybe unconsciously or unintentionally, you are dehumanizing the animals used for the creation of the product you’re about to buy. Couldn’t that animal be from the same species as your favorite TV character, or even your old pet? I think so, easily.