Introducing “Toward Naturalistic Interactive Neuroimaging,” Part II: Conversation with Uri Hasson
A professor in Princeton University’s psychology department, Uri Hasson will be one of five speakers in tomorrow morning’s symposium “Toward Naturalistic Interactive Neuroimaging” (8:30–11 AM, Ballroom C).

Hasson’s research draws on his background in visual neuroscience to study how the brain makes use of information acquired in real time, and to examine the synchrony between two brains during natural communication — the subject of his talk for Tuesday’s symposium.
To learn more about what drew Hasson to naturalistic neuroimaging, I met with him at SfN for a brief Q&A.
What questions are you interested in answering with your research?
Mainly we’re looking at how the brain processes real-life information. For example, right now: what’s happening in your brain when you listen to what I’m saying, and what’s happening in my brain when I’m talking to you. We find that whenever you go to natural setups – real-life situations – many of the models that people work with in the field no longer apply. So it becomes interesting. We have some more complicated findings.
We have two lines of research in the lab. One is memory, or processing timescale, and this I’m not going to talk about at all tomorrow. But basically, we realize that most of the field is working with event-related designs. The problem is that, you know, there are many dimensions to our conversation. Too many. And as a scientist, you say, “No, no, no, I need to control everything.”
So what people do, [in the] first stage, is remove 90% of the dimensions. And one of the dimensions everyone is removing is time. You’re usually working in an event-related design, and each event, whether two hundred milliseconds or nine seconds long, is independent of the next one. But actually, right now, what I’m saying is related to what I was saying five minutes ago, what I was saying a minute ago, and maybe to what you said over email two days ago.
That makes data analysis complicated.
Yeah. It also makes it interesting, because suddenly you realize that memory is everywhere. If you think about memory, what comes to mind? Working memory, long-term memory, short-term memory… but let’s think about working memory for a second, because right now we’re doing online processing. It’s really unclear how to use this term, working memory, in the context of a conversation. What is the capacity limit of working memory? Is it five syllables? Five words? Five sentences? Five concepts?
And then also, memory is usually separated from processing, like in a computer. For a working memory task, [most researchers are] looking at the delay period, when you do nothing. But here, there is no delay. And you need to use the memory to understand what I’m saying. It’s very different from the way people currently think about memory.
[My lab has] the memory line of research, and we also have the communication line. We take the two sides of communication, the speaker and the listener… You see what’s going on in the speaker’s brain, and in the listener’s brain, and [you] want to see how information is transformed — from my brain, via this sound wave, we become coupled. I have a dynamical system in my brain, you have a dynamical system in your brain, and the sound waves couple them together.
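(In practice, this kind of speaker–listener coupling is often quantified by correlating the speaker’s and listener’s brain activity at a range of temporal lags. The snippet below is a minimal, hypothetical sketch of that idea on made-up data, using a simple Pearson correlation; it is an illustration, not the Hasson lab’s actual analysis code.)

```python
# Minimal sketch of speaker-listener coupling as a lagged correlation (illustration
# only; the assumption here is that coupling shows up as correlated activity when
# the listener's response is shifted in time relative to the speaker's).

import numpy as np

def lagged_coupling(speaker, listener, max_lag=5):
    """Correlate a speaker's and a listener's time course at a range of lags.

    speaker, listener : 1-D arrays (one voxel's activity over time, in TRs).
    Returns a dict mapping lag (in TRs) to the Pearson r at that lag; positive
    lags mean the listener's activity follows the speaker's.
    """
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            s, l = speaker[: len(speaker) - lag], listener[lag:]
        else:
            s, l = speaker[-lag:], listener[: len(listener) + lag]
        out[lag] = float(np.corrcoef(s, l)[0, 1])
    return out

# Hypothetical data: the listener tracks the speaker with a 2-TR delay plus noise.
rng = np.random.default_rng(1)
speaker = rng.standard_normal(300)
listener = np.roll(speaker, 2) + 0.5 * rng.standard_normal(300)
coupling = lagged_coupling(speaker, listener)
print(max(coupling, key=coupling.get))  # peak near lag = 2
```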
How did you get interested in studying real-life scenarios?
I used to be a vision scientist, and we always used these highly artificial stimuli. During my Ph.D., I wanted to know what happens in real-life vision – focusing only on the visual system, not communication. And I said, okay, let’s run a movie and see how the visual system responds to it. And we were sure that this [was] probably, you know, a fun experiment, but one we’d never publish.
And then something interesting happened: I looked at the data and developed a new way of analyzing it, because we didn’t know how to analyze this. Ten minutes of a movie – that’s a lot of dimensions. So the first thing we did: I took a brain area and decided to look at the time courses, thinking I would go back to the movie and see what [was] driving the activation. A reverse correlation. And I went to the face area and saw that each time you view a face, the fusiform face area lights up. With the movie, it was really fun to see the brain tell you what it likes.
But then I saw that if we go to my brain and your brain, the responses in the fusiform face area will be very similar. But okay, that’s the fusiform face area. So I started to go area by area, and I saw that about 60% of the cortex responded very similarly across people. And I said, “Wow, how can that be?” Because, you know, as scientists, we learn that if we are not controlling all the parameters, we’re going to get variability.
So we expected to see huge variability, but we saw this huge convergence – not only in the auditory cortex or the visual cortex, but in the frontal areas, in many brain areas. High order, low order – and suddenly I started to ask, “Why are people similar?” That’s what drove me to this line of research.
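(Hasson’s lab came to call this measure inter-subject correlation: for each brain area, correlate its response time course in one viewer with the same area’s time course in another viewer of the same movie. The snippet below is a minimal, hypothetical sketch of that per-voxel computation on simulated data, not the lab’s actual pipeline.)

```python
# Minimal sketch of the inter-subject correlation idea: correlate each voxel's
# time course in one subject with the same voxel's time course in another subject
# who watched the same movie. Simulated data only.

import numpy as np

def intersubject_correlation(subj_a, subj_b):
    """Pearson correlation per voxel between two subjects' responses.

    subj_a, subj_b : arrays of shape (n_timepoints, n_voxels), same movie,
    same voxel grid. Returns an array of shape (n_voxels,), one r per voxel.
    """
    a = subj_a - subj_a.mean(axis=0)
    b = subj_b - subj_b.mean(axis=0)
    num = (a * b).sum(axis=0)
    denom = np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))
    return num / denom

# Hypothetical example: 300 timepoints (10 min at a 2 s TR), 5000 voxels of noise
# plus a shared "stimulus-driven" component, so the correlation comes out positive.
rng = np.random.default_rng(0)
shared = rng.standard_normal((300, 5000))
subj_a = shared + rng.standard_normal((300, 5000))
subj_b = shared + rng.standard_normal((300, 5000))
r = intersubject_correlation(subj_a, subj_b)
print(round(float(r.mean()), 2))  # around 0.5 for this signal-to-noise ratio
```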
Why do you think naturalistic neuroimaging matters?
We want to understand how the brain works. There is a tension, right? If you’re working in this highly complex real-life parameter space, it’s difficult to know which intervening parameters are important and which are not. Let’s say you’re working in a narrative parameter space, in real life, but in your experiment you have only four variables. You control for three, and you vary one. What happens if you add parameters? Let’s say you add four more parameters, ten more, twenty more. Is [your result] going to hold?
Basically, is it possible to generalize from the controlled lab to real life, or not? If it’s not going to generalize, and adding three parameters is going to change the entire picture, then what did you learn? There is a tension in science between controlled settings and real life, and you need to do both.
Hear more about Hasson’s research during tomorrow morning’s talk, “Coupled neural systems underlie the production and comprehension of naturalistic narrative speech” (9:45 in Ballroom C).
You can find even more information — and copies of publications — on the Hasson lab site.
