From Monkeys to Humans

A few days ago, neuroscientist Tony Movshon and his team released a paper on visual brain area “V2”, whose function has been quite enigmatic for vision scientists. As a grad student who studies vision, I thought it was a cool paper, but as a person interested in the science/society interface and the process of science, I think it’s also an interesting look into the future of science. Here’s why.

The paper starts out like one would expect from a visual neuroscience publication: let’s stick an electrode into a monkey, show it a bunch of different stimuli, and see what happens. Turns out, monkey V2, but not V1, responds preferentially to “texture” stimuli, a set of images that the researchers created from natural images. (*gasp* they probably found this out by accident, and/or have invested a ton of time in finding the exact set of stimuli that happens to activate V2 and not V1… but alas, there it is).
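For the curious, here’s a minimal sketch (in Python, with made-up firing rates) of the kind of modulation index one could use to ask whether a neuron prefers texture to, say, spectrally matched noise. The numbers and the exact comparison are my own illustration, not the paper’s.

```python
import numpy as np

# Hypothetical trial-averaged firing rates (spikes/s) for one neuron:
# responses to naturalistic texture images vs. matched noise images,
# one value per texture family. All values here are invented.
texture_rates = np.array([24.0, 31.5, 18.2, 27.9])
noise_rates = np.array([22.5, 23.0, 17.8, 21.1])

# A simple modulation index in [-1, 1]: positive when texture drives
# the neuron more strongly than noise does, near zero when it doesn't care.
modulation_index = (texture_rates - noise_rates) / (texture_rates + noise_rates)

print(modulation_index)  # a "V2-like" neuron would tend to sit above zero
```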

Then, they do something that some, but not all, vision papers do: repeat the same thing with humans. Of course, we can’t stick an electrode into a human brain, so they do this with functional MRI. They find, not surprisingly, that human V2 also responds to patterned textures. In addition, even higher visual areas, V3 and V4, seem to care about texture. And when they used a three-alternative choice task to ask people which textures were more natural, they found that perceptual sensitivity was correlated with neural modulation – in other words, if you are able to distinguish between stimuli, so are your neurons.
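To make that “correlated with” claim concrete, here’s a tiny sketch with invented numbers: one behavioral sensitivity value and one neural modulation value per texture family, and a single Pearson correlation across families. The real analysis is of course more careful than this.

```python
import numpy as np

# Hypothetical per-texture-family values (all made up): how well observers
# could tell the textures apart, and how strongly V2 responses were modulated
# by the same texture families.
perceptual_sensitivity = np.array([0.4, 1.1, 0.7, 1.8, 0.9, 1.5])
neural_modulation = np.array([0.1, 0.35, 0.2, 0.55, 0.3, 0.45])

# Pearson correlation across families: if it is high, the texture families
# that people discriminate well are also the ones that modulate V2 strongly.
r = np.corrcoef(perceptual_sensitivity, neural_modulation)[0, 1]
print(f"correlation between behavior and neural modulation: r = {r:.2f}")
```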

Figure: Crowd-sourced psychophysical estimates of sensitivity for hundreds of texture families.

However, these researchers weren’t done yet. They had only tested 15 different “texture” families, and wanted to get to the bottom of what exactly it was about the textures that was eliciting a response. So they generated almost 500 texture families and sent their study out into the internet abyss. Using Amazon’s Mechanical Turk service, which actually pays users a few pennies to complete “Human Intelligence Tasks,” they collected over 300 hours of data on a task that asked “Which of these stimuli are different?” to get a crowd estimate of perceptual sensitivity. Then they took the stimuli that were the least or most perceptually salient to the crowd, and tested only those sets of textures in both monkeys and humans. With data on 500 different texture families and a host of statistical techniques, they were able to figure out exactly which visual parameters of the images were indicative of perceptual sensitivity, which was ultimately the point of this whole exercise.
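Roughly, that last step amounts to turning crowd responses into a sensitivity estimate per texture family and then asking which image parameters predict it. The sketch below fakes everything (random “image statistics” and a made-up response rule) and uses plain least squares as a stand-in for the paper’s actual statistical techniques, just to show the shape of the analysis.

```python
import numpy as np

# Hypothetical Mechanical Turk results: for each texture family, the fraction
# of trials on which workers picked the odd stimulus, plus a few image
# statistics computed from that family's textures. All simulated here.
n_families = 500
rng = np.random.default_rng(0)
texture_stats = rng.normal(size=(n_families, 4))  # e.g. 4 image parameters per family

# Pretend the crowd's accuracy depends on a weighted mix of those parameters.
proportion_correct = 1 / (1 + np.exp(-texture_stats @ np.array([0.8, 0.1, -0.3, 0.0])))

# Which image parameters predict the crowd's sensitivity? A simple linear
# regression on the crowd estimates is one way to ask.
X = np.column_stack([texture_stats, np.ones(n_families)])  # add an intercept column
weights, *_ = np.linalg.lstsq(X, proportion_correct, rcond=None)
print("estimated weight per image parameter:", weights[:-1])
```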

These final pieces of the puzzle – the fact that neural modulation depends on specific features of the texture, and that this modulation maps onto how sensitive the viewer is to differences in the texture – were made possible by crowd-sourcing. Many groups rely on user input to deal with masses of data, including Sebastian Seung’s lab with its EyeWire game. Not surprisingly, more and more researchers are recognizing the power of turning to the crowd instead of relying on a few subjects (something that has been the bread and butter of sociology and psychology for years). The Human Connectome Project relies on multiple university groups to collect data, which is then pooled into one location for analysis. As Obama’s Brain Initiative takes off, it might find more ways to deal with big data while bored internet users make enough money for a soda (after about 3 hours of work). Multi-level science is inevitably the future, even for the most basic experimental questions.
