Ok so this is...not exactly accurate.
They ran the fMRI testing on the dead salmon to test the MRI machine before doing an experiment looking at humans' response to social stimuli. They wanted something "with good contrast, but also with several clearly defined and distinguishable types of tissue..." (1), so they got a fresh salmon from the grocery store. Having tested the machine, they laughed at the absurdity, set aside the results, and did their regular study. Years later, one of the authors was doing a seminar on analyzing fMRI data and used the salmon data they already had as a goofy example of improper analysis, and they found something interesting!
So when you do fMRI imaging, you get a lot of data. Like, 40-130k voxels (small cubes of your brain, like a 3D version of a pixel on a screen) showing changes in blood oxygen levels (which does seem to be a good proxy for brain activity, based on research like optogenetic studies that directly tested whether blood oxygen levels track neural activity*). (2, 3) And that data comes from all over the brain, because the rest of your brain is still active and doing different things while the subject is answering your question or looking at the image you're showing them or doing whatever task you've asked them to do.
So now you have this fuckton of data, what now? Well, now you look at the signal each voxel produced over the duration of the scan and compare its readings at different times to see which voxels are "activated," where those voxels are (clustered together or spread out randomly?), and what physical activity or stimulus the activation is correlated with (i.e. which question was asked, which image was shown, or which task was done).
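To make "correlated to a task" concrete, here's a toy sketch of testing whether one voxel's time series tracks a simple on/off stimulus. All the numbers and names here are made up for illustration; a real fMRI pipeline is far more involved than a bare Pearson correlation:

```python
def pearson(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical on/off stimulus pattern: 1 while the image is shown, 0 at rest.
task = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0]

# Two made-up voxel time series: one that roughly follows the task,
# one that just wobbles on its own.
active_voxel   = [0.1, 0.0, 0.9, 1.1, 0.2, 0.1, 1.0, 0.8, 0.0, 0.1]
inactive_voxel = [0.4, 0.6, 0.4, 0.6, 0.4, 0.6, 0.4, 0.6, 0.4, 0.6]

print(pearson(task, active_voxel))    # close to 1: looks "activated"
print(pearson(task, inactive_voxel))  # near 0: no task correlation
```

In a real analysis you'd run something like this (usually a general linear model, not a raw correlation) for every one of those tens of thousands of voxels, which is exactly how the comparison count explodes.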
Figuring out what qualifies as a significant change in blood oxygen levels for each voxel is done using "a fair bit of stats," because it's a shit load of data and picking out significant changes can come down to a fraction of a percent. So now you're doing thousands of comparisons, and you run into a well known issue called the "multiple comparisons problem": if you run a lot of statistical tests, some of them will come out positive purely by chance, even when there's no real effect (i.e. false positives). False positives really fuck up research and are a problem when you're trying to figure out, say, "new MRI protocols to use with adolescents and adults" (4), so you have to correct for them when doing the "fair bit of stats" to make sure you aren't reporting on noise.
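You can watch the multiple comparisons problem happen with a quick simulation: test ~65,000 voxels of pure noise (our "dead salmon") at the usual p < 0.05 threshold and thousands of them light up anyway; apply a Bonferroni-style correction (dividing the threshold by the number of tests, one of several standard fixes) and essentially none do. Every number below is made up for illustration, not taken from the actual study:

```python
import random

random.seed(42)

N_VOXELS = 65_000   # roughly the voxel count the authors mention
N_SCANS = 20        # hypothetical number of time points per voxel
Z_CRIT = 1.96       # two-sided z threshold for p < 0.05
Z_CRIT_BONF = 4.95  # approximate two-sided threshold for 0.05 / N_VOXELS

false_pos_uncorrected = 0
false_pos_bonferroni = 0

# Every voxel is pure standard-normal noise: there is no real signal anywhere,
# so anything that crosses a threshold is by definition a false positive.
for _ in range(N_VOXELS):
    samples = [random.gauss(0, 1) for _ in range(N_SCANS)]
    # z-score of the voxel's mean: mean / (sigma / sqrt(n)) with sigma = 1
    z = sum(samples) / (N_SCANS ** 0.5)
    if abs(z) > Z_CRIT:
        false_pos_uncorrected += 1
    if abs(z) > Z_CRIT_BONF:
        false_pos_bonferroni += 1

print(f"uncorrected 'active' voxels: {false_pos_uncorrected}")  # roughly 5% of 65,000
print(f"corrected   'active' voxels: {false_pos_bonferroni}")   # almost always 0
```

With no correction you get thousands of "activated" voxels in a brain made of nothing but noise; a few of those landing next to each other is exactly how a dead fish appears to respond to photographs.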
What "The Dead Salmon" study found was that among all the false positives (which is to be expected, as "just about any volume with 65,000 voxels is going to have some false positives with uncorrected statistics" (4)) there were 3 activated voxels that just happened to be clustered together right in the middle of the salmon's brain, which gave the appearance of brain activity in a dead fish because they hadn't used the proper corrections in their statistical model.
That's what led to it being an actual study instead of a fun anecdote, and it's why they submitted it to the conference. It was all to argue that the minority of researchers at the time who reported fMRI statistics without correcting them should stop doing that, but then neuroscience blogs discovered the poster and wrote about it, which led to it going viral (before the authors could publish their commentary on it) and being awarded an Ig Nobel Prize in 2012.
Ultimately, it "doesn’t add anything to the technical discussion of how multiple comparisons correction is performed, it is simply a salient reminder of why proper correction is always necessary" (5), and it didn't invalidate many previous studies, let alone the entire field of neuropsychology. (Going viral did seem to have the desired effect, though: by the time the authors got the Ig Nobel Prize 3 years later, only 10% of fMRI papers reported no multiple comparisons correction, where before it was closer to 25-40%.) It provided a goofy example of why it's important for researchers to do the proper statistics when analyzing fMRI data, statistics that have been standard since the 90s and that most researchers were already using. (6)
Since tumblr hates reblogs with links I will reblog again with sources and additional information.
*The difficulty with using brain imaging to make any kind of claim about the way humans think or act or interact with others is more in how the data is interpreted, since brains are complex, several parts of the brain can be involved in a single thought or action, and a whole host of other things can complicate it that would be an entirely different post. Typically tho, the "men think of women as objects because the same part of the brain is activated by women as by tools" type of reporting comes from journalists who are trying to interpret scientific papers that might say something more like "images of women, tools, and elephants elicited activity in the same areas of the brain in men...this may indicate a commonality between these images, but determining what that commonality might be is beyond the scope of the current study and further research is needed." The journalists just picked out the part that would be most sensational so people would read their story.