Good question! You're thinking critically about a "peer-reviewed journal article" rather than just accepting it on authority. The answer is yes and no. It means only that noise may well be clouding our ability to detect the real effect of pheromones that may be there. So yes, there's noise in the study that is interfering with things, and from the article it doesn't look like they've found a way to filter it out. So even though this is one of the most interesting pieces of research on pheromones, you still have to take it with a couple of shakes of salt, and the overall taste is not satisfying.
In particular, it means that the process of rating any trait in the specific way that they did it, even an arbitrary one, might have influenced the pheromone-condition ratings to some degree, or vice versa. This is what I mean by "method effect". Another, more intuitive way of looking at it is that the pheromones could influence the rating of any trait, even a neutral, random one. You're ready to endorse anything at that point -- it could be "parrot" or "swiss cheese"! "Hey! Androsterone made subjects rate these pictures as more 'parrot'!" Either way, that amount, related to the 0.05 they quoted, is a first estimate of the method effect. So the results you can take more seriously at this point, without more sophisticated analysis or further information, are the ones with a p-value well below 0.05, like 0.001 (the effect of ovulatory phase).
Anyway, if you rated "black" by another method, you would see another method-effect estimate, and the more methods you use, the closer you'd get to the theoretical "actual method effect" for that particular procedure. But to do it right, you'd also want to measure more than one "arbitrary trait" in this case, like "green" and "Portuguese" or something (even parrot or swiss cheese!). Ideally, you'd get random adjectives from the dictionary and keep pulling them out of a hat until you found some that appear to be "most neutral and arbitrary". Three would be enough, realistically speaking.
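If it helps to make that concrete, here's a toy sketch of the calibration idea in Python. Every number in it is invented purely for illustration (there is no real data here), and the "neutral traits" are just the silly ones from above; the only point is the logic of pooling shifts on arbitrary traits into a method-effect baseline:

```python
import random

random.seed(1)

def simulate_ratings(n, mean, sd):
    """Simulated 1-7 style trait ratings. Purely illustrative, not real data."""
    return [random.gauss(mean, sd) for _ in range(n)]

# Hypothetical "out of a hat" neutral traits (names borrowed from the discussion).
neutral_traits = ["parrot", "swiss cheese", "Portuguese"]

method_effects = []
for trait in neutral_traits:
    control   = simulate_ratings(30, mean=4.0, sd=1.0)
    pheromone = simulate_ratings(30, mean=4.3, sd=1.0)  # small built-in shift
    # Mean shift on a trait that *shouldn't* move = one method-effect estimate.
    shift = sum(pheromone) / len(pheromone) - sum(control) / len(control)
    method_effects.append(shift)
    print(f"{trait:>12}: shift = {shift:+.2f}")

# Pool the neutral-trait shifts into a single baseline.
baseline = sum(method_effects) / len(method_effects)
print(f"pooled method-effect estimate: {baseline:+.2f}")
```

A real effect of interest (attractiveness under androsterone, say) would then have to clearly exceed that pooled baseline before you took it seriously.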
But it's not even that simple. Since endorsing things is like saying yes, and since saying yes to something is almost surely part of one's attraction to that thing, you have a problem measuring the true method effect independently of your effect of interest (e.g., attraction). To get around this, you'd want to measure endorsement of the same neutral thing in a "No" kind of way, like "not black" or something. You could also leave the category optional and compare how many chose to rate it versus not under the pheromone condition, and then compare that with the "endorsement effect" to get at the true method effect. Research design isn't easy! I'd have to think about it more. But this is a cutting-edge way of looking at it that too few researchers are into as of yet. You can get much bigger effect sizes doing things this way, and much more stunning results.
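Here's the same kind of toy sketch for the yes-saying problem, again with completely made-up endorsement probabilities. The trick is that a pure acquiescence bias pushes endorsement of both "black" and "not black" up together, so averaging the two shifts estimates the yes-bias, while the difference between them is what survives once yes-saying is subtracted out:

```python
import random

random.seed(2)

def endorse_rate(n, p):
    """Fraction of n simulated subjects endorsing an item with probability p."""
    return sum(random.random() < p for _ in range(n)) / n

n = 50
# Hypothetical numbers: suppose the pheromone adds a flat ~0.10 "say yes"
# bias to ANY item, regardless of content.
black_ctrl     = endorse_rate(n, 0.40)
black_pher     = endorse_rate(n, 0.50)
not_black_ctrl = endorse_rate(n, 0.60)
not_black_pher = endorse_rate(n, 0.70)

# Acquiescence raises both "black" and "not black" under the pheromone,
# so the average of the two shifts isolates the yes-saying component.
yes_bias = ((black_pher - black_ctrl) + (not_black_pher - not_black_ctrl)) / 2
# The difference of shifts is what remains once yes-saying cancels out.
content_shift = (black_pher - black_ctrl) - (not_black_pher - not_black_ctrl)

print(f"estimated yes-bias:     {yes_bias:+.2f}")
print(f"residual content shift: {content_shift:+.2f}")
```

With these invented numbers the two shifts are roughly equal, so nearly everything lands in the yes-bias term, which is exactly the trap the negated item is there to catch.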
This kind of thing calibrates the measurement process for you. Research
on humans needs to be pretty damn sophisticated to be able to detect the kinds of complicated and subtle causal
forces you are studying. But once you get it aimed and focused, you can have a powerful and sensitive lens for
looking at human nature.
Until we start doing things like this in research (and this just scratches the surface), we'll have very little idea, scientifically speaking, how much pheromones affect anything! So don't listen to anyone who tells you they already know. All human-science researchers should do things like this in every study to increase their construct validity. Do they? Too, too rarely. I know of no one in pheromone research who does, as the field is not really attracting the most skilled research designers (I'll leave it to your imagination whether or not I happen to fit into that category). But these are the types of things Maiworm should be thinking about in her office when putting studies together. And to expect biologist-types to know this stuff without advanced training in psychological research (which I'm providing here, free of charge)? Forget it.