Scanning Dead Salmon in fMRI Machine Highlights Risk of Red Herrings


This was a story featured on the WIRED website. I just kept the original title because it's a genuinely interesting story that doesn't need much in the way of commentary.

The key point of the whole thing is as follows:

Neuroscientist Craig Bennett purchased a whole Atlantic salmon, took it to a lab at Dartmouth, and put it into an fMRI machine used to study the brain. The beautiful fish was to be the lab’s test object as they worked out some new methods.

So, as the fish sat in the scanner, they showed it “a series of photographs depicting human individuals in social situations.” To maintain the rigor of the protocol (and perhaps because it was hilarious), the salmon, just like a human test subject, “was asked to determine what emotion the individual in the photo must have been experiencing.”

The salmon, as Bennett’s poster on the test dryly notes, “was not alive at the time of scanning.”

If that were all that had occurred, the salmon scanning would simply live on in Dartmouth lore as a “crowning achievement in terms of ridiculous objects to scan.” But the fish had a surprise in store. When they got around to analyzing the voxel (think: 3-D or “volumetric” pixel) data, the voxels representing the area where the salmon’s tiny brain sat showed evidence of activity. In the fMRI scan, it looked like the dead salmon was actually thinking about the pictures it had been shown.

This sorta leads to a bigger question when doing research: are my results real? Sure, the statistics are built to account for chance events ("there's only a 5% chance I'd see results like these if nothing real were going on, a snowball's chance in hell...") but when your results do actually fall into that 5% (or less), what are the implications? And are false positives worse than false negatives? (That one gets a lot more interesting when you get into medical research.)
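This is exactly what bit the salmon: an fMRI volume contains thousands of voxels, and each one is effectively its own statistical test. Even if every single test honestly keeps its 5% false-positive rate, the chance that *at least one* voxel lights up by pure luck skyrockets with the number of comparisons. A quick back-of-the-envelope sketch (assuming independent tests, which real voxels aren't, but it gets the point across):

```python
# Chance of at least one false positive among k independent tests,
# each run at the usual alpha = 0.05 threshold.
alpha = 0.05

for k in (1, 10, 100, 10_000):  # 10,000 ~ voxels in a small fMRI volume
    p_at_least_one = 1 - (1 - alpha) ** k
    print(f"{k:>6} tests -> P(>=1 false positive) = {p_at_least_one:.4f}")
```

With 100 uncorrected tests you're already above a 99% chance of a spurious hit, which is why multiple-comparisons corrections exist, and why a dead fish can appear to "think."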

Check out this interesting article asking the question: what if a new test for screening terrorists were only 90% accurate? It brings some of these ideas into sharper focus...
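The catch with a "90% accurate" screening test is the base rate: when the thing you're looking for is extremely rare, even a good test produces a flood of false alarms. A hedged illustration with made-up numbers (1 in a million travelers is a true target, and the test is 90% accurate in both directions):

```python
# Hypothetical base-rate example via Bayes' rule.
# All numbers here are illustrative assumptions, not from the article.
prevalence = 1e-6          # assumed: 1 in 1,000,000 travelers
sensitivity = 0.90         # P(test positive | actual threat)
specificity = 0.90         # P(test negative | no threat)

# Total probability of a positive result (true hits + false alarms).
p_positive = (sensitivity * prevalence
              + (1 - specificity) * (1 - prevalence))

# Positive predictive value: P(actual threat | test positive).
ppv = sensitivity * prevalence / p_positive
print(f"P(threat | positive) = {ppv:.6f}")
```

Under those assumptions, a positive result means roughly a 1-in-100,000 chance the person flagged is a real threat; virtually every positive is a false positive, which is the same lesson the salmon teaches at the voxel level.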

Any thoughts?

