Bem's guide to writing a journal article

scientific hypothesis

11 replies to this topic

#1 hobglobin

    Growing old is mandatory, growing up is optional...

  • Global Moderators
  • 5,531 posts

Posted 04 May 2014 - 09:01 AM

Just came across Daryl J. Bem's guide "Writing the Empirical Journal Article" from 2003. It's about writing psychology papers, but it's quite well known and hosted on many university websites for download (e.g. Yale).

Anyway, I heard a radio feature that mentioned a rather questionable paragraph of this text (I set it in italics), and I wonder what you think about it. I also find it quite odd and would not do this, since it sounds as if he suggests adapting the hypotheses to the results, as if you can do whatever you want to get a paper with "positive" results (which is also incorrect from a statistical point of view)...

 

Here is the text:

 

"Which Article Should You Write?
There are two possible articles you can write: (i) the article you planned to write when you designed your study or (ii) the article that makes the most sense now that you have seen the results. They are rarely the same, and the correct answer is (ii).

The conventional view of the research process is that we first derive a set of hypotheses from a theory, design and conduct a study to test these hypotheses, analyze the data to see if they were confirmed or disconfirmed, and then chronicle this sequence of events in the journal article. If this is how our enterprise actually proceeded, we could write most of the article before we collected the data. We could write the introduction and method sections completely, prepare the results section in skeleton form, leaving spaces to be filled in by the specific numerical results, and have two possible discussion sections ready to go, one for positive results, the other for negative results.
But this is not how our enterprise actually proceeds. Psychology is more exciting than that, and the best journal articles are informed by the actual empirical findings from the opening sentence. Before writing your article, then, you need to Analyze Your Data...."


Edited by hobglobin, 04 May 2014 - 09:04 AM.
changed (a) and (b) to (i) and (ii) to avoid smilies

One must presume that long and short arguments contribute to the same end. - Epicurus
...except casandra's that belong to the funniest, most interesting and imaginative (or over-imaginative?) ones, I suppose.

That is....if she posts at all.


#2 bob1

    Thelymitra pulchella

  • Global Moderators
  • 5,738 posts

Posted 04 May 2014 - 01:09 PM

To some extent he is correct - the results should be interpreted in the context of the larger picture, which may mean that your original hypothesis was flawed in some manner (not necessarily wrong), but in a way that lets you interpret the results differently. For example, if you set out to find out whether protein A affects production of X, and you find that it doesn't, but it does affect something closely related that could be confused with X (an isoform?), then you have a positive result - it just isn't what you set out to find.



#3 phage434

    Veteran

  • Global Moderators
  • 2,475 posts

Posted 04 May 2014 - 01:26 PM

On the other hand, if you set out to do that experiment with no clear idea of X, and discover a barely detectable interaction with protein number 583 out of the 1000 that you tested, with P < 0.05, then you are chasing windmills. Sad to say, this is a common problem in psychology: "We'll find an effect SOMEWHERE if only we look hard enough at enough possibilities." After all, at that threshold you would expect about 50 of the 1000 proteins to show an effect by chance alone. Perhaps you could report that as a result!!
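The 50-out-of-1000 figure follows from the fact that, under a true null hypothesis, p-values are uniformly distributed on [0, 1]. A minimal Python sketch (a simulated screen with no real effects, not actual data) shows how many chance "hits" such a fishing expedition produces:

```python
import random

random.seed(0)  # reproducible illustration

# Under a true null hypothesis, p-values are uniform on [0, 1], so a
# screen of 1000 proteins with NO real effects still yields "hits":
n_tests = 1000
alpha = 0.05
p_values = [random.random() for _ in range(n_tests)]
false_hits = sum(p < alpha for p in p_values)
print(f"{false_hits} of {n_tests} null tests reach p < {alpha}")
# on average, alpha * n_tests = 50 such chance hits
```

Each run lands near 50 purely by chance, which is exactly why a single p < 0.05 from a large untargeted screen means very little on its own.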



#4 bob1

    Thelymitra pulchella

  • Global Moderators
  • 5,738 posts

Posted 04 May 2014 - 09:24 PM

Unfortunately too true, phage.



#5 pito

    Veteran

  • Global Moderators
  • 1,332 posts

Posted 05 May 2014 - 04:06 AM

(quoting phage434's post #3 above)

I can understand that your results are often not what you expected, and it seems logical that you then report what you actually found. I find that quite normal, to be honest; you cannot really predict everything.

 

 

However, the problem here is "psychology"... in my opinion (often) not really a science.

Remember the Dutch professor who pretty much made up 20 years of "scientific" work/papers...

 

Although this is happening in "real science" more and more too...

http://www.reuters.c...E82R12P20120328

http://www.sciencema...42/6154/60.full

 

Many large drug/pharma companies don't even care anymore about "scientific academic" publications...


If you don't know it, then ask it! Better to ask and look foolish to some than not ask and stay stupid.


#6 hobglobin

    Growing old is mandatory, growing up is optional...

  • Global Moderators
  • 5,531 posts

Posted 05 May 2014 - 07:32 AM

Yes, surely it's psychology, which seems to be a "grey area" between science and pseudo-science, or even voodoo...

Anyway, to me these sentences sound like fishing for the right hypothesis: if one is not supported by your data, you select another until it fits... and you avoid having difficult-to-publish or unpublishable negative results. But that is not really a research plan; it's making a coherent story out of your data post hoc.

It also fits with the practice of collecting so many different data that you can later ignore the non-significant ones, which of course also produces a bias...




#7 Tabaluga

    Making glass out of shards

  • Active Members
  • 394 posts

Posted 07 May 2014 - 12:31 PM

I agree with Bob's first post and also with your concerns. But isn't it common to do that? Even well-known scientists admit it. I don't know if anyone ever said this on record, but Elizabeth Blackburn once gave a talk at our institute and, in response to a question from the audience, said that when you read a paper, the procedure presented there is normally not the actual order of the methods applied, but rather what was deemed the logical procedure to get these results and prove this point (i.e. the story that makes the most sense and is most elegant). That doesn't mean it's made up, but that the order of experiments was constructed in retrospect.

In my lab I often heard things like "you have got to make a logical story out of it" or "it must have a linear thread", so I thought this was normal? Nobody would accept a paper that states chronologically "we had no clue in the first place, so we poked here and there, and 90% of our experiments failed anyway, so here is what we got" (a bit exaggerated, but basically it's like that).

Sure, this isn't ideal practice, but isn't it more acceptable than fishing for significance (which I don't think Bem meant in the first place; I think he meant something along the lines of what I just said)?

EDIT: I just read pito's links. Appalling. It reminds me of how those random nonsensical papers you can generate somewhere on the web were actually accepted by some small journals...


Edited by Tabaluga, 07 May 2014 - 12:46 PM.

He sleeps. Though his fate was very strange,
he lived. He died when he no longer had his angel;
it came about quite simply, of itself,
as night falls when the day departs.

 


#8 phage434

    Veteran

  • Global Moderators
  • 2,475 posts

Posted 07 May 2014 - 12:40 PM

There's nothing wrong with discovering a barely detectable effect in an unexpected place in a different experiment. But the next step is to test that result multiple times and make absolutely certain that it is a real effect. Ideally, you design a different experiment testing the same hypothesis a different way. The problem comes when that initial result is immediately reported (often to the institute press office, not in a journal publication).
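The value of that independent replication step can be put in numbers: if each true-null test has a false-positive rate of α, demanding a second, independent confirmation at the same threshold multiplies the rates. A back-of-the-envelope sketch, reusing the hypothetical 1000-protein screen from earlier in the thread and assuming the replication is truly independent:

```python
n_tests = 1000   # candidate hypotheses screened
alpha = 0.05     # significance threshold per test

# Expected chance "discoveries" from a single screen of true nulls:
single_pass = n_tests * alpha            # 1000 * 0.05  = 50
# Requiring an independent replication at the same threshold:
with_replication = n_tests * alpha ** 2  # 1000 * 0.05^2 = 2.5
print(f"single screen: ~{single_pass:.0f} expected false positives")
print(f"with independent replication: ~{with_replication:.1f}")
```

The twenty-fold drop is the whole argument for "test that result multiple times" - and it only holds if the replication really is a new, independent experiment rather than a reanalysis of the same data.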



#9 pito

    Veteran

  • Global Moderators
  • 1,332 posts

Posted 08 May 2014 - 08:44 AM

Of course, there is nothing wrong with what you describe; actually, it's pretty normal!

And most "amazing discoveries" were made by "accident"!

But it's the way it's done that is often a problem, and especially in the field of psychology there are some weird papers.

BTW, to make it a bit more extreme: if you design an experiment based on what you expect (and "how it should be"), you are often very biased too! So that approach is also often not good!

 

(quoting Tabaluga's post #7 above)

 




#10 hobglobin

    Growing old is mandatory, growing up is optional...

  • Global Moderators
  • 5,531 posts

Posted 08 May 2014 - 10:12 AM

IMO these accidental findings can happen, but you should not use them as a general working method (e.g. leaving my used petri dishes in a dirty hood and hoping that someday a new antibiotic-producing fungus will grow...).

And especially in the beginning it might sometimes be necessary and/or fruitful to have a flawed or very broad hypothesis, but later you should understand the system well enough to have better hypotheses that don't need to be adjusted to reality frequently...

And I think all branches of science have such problems, but they might be easier to find in the natural sciences (using statistics to find data anomalies, or just trying to repeat the experiment, as in the cancer works).

But not always (see e.g. here), and very complicated and specialised science such as quantum physics or mathematics sometimes seems similarly prone to fraud problems as the social sciences (a classic is the Sokal affair), with their very special and difficult-sounding scientific jargon:

 

[image: academic-gibberish.jpg]




#11 Tabaluga

    Making glass out of shards

  • Active Members
  • 394 posts

Posted 08 May 2014 - 10:34 AM

Okay, I also thought that generally this was somehow "normal", even if it's not ideal, of course. As for accidental findings, of course they shouldn't be standard... (I recall another thread where someone actually said he'd want to use some chemicals that had possibly gone bad (I don't remember what exactly) because it could promote an accidental finding, which is of course complete nonsense...) But when your project is "find out the function of protein X in these particular cells", which my project actually was, it's necessary to start out broad, and you have to poke around a bit at first in the hope of narrowing it down...

There is clearly a great discrepancy between how it should be done and how it is done. BTW, I think most of you will know this famous joke about research-paper phrases and what they mean: http://sciencebasedl...86tv1qbh26i.jpg



 


#12 pito

    Veteran

  • Global Moderators
  • 1,332 posts

Posted 08 May 2014 - 10:41 AM

Of course, what you state is true.

I was speaking more of small things (perhaps I phrased it badly by saying "amazing discoveries").

A simplified example: imagine you study proteins X and Y (which are supposed to interact), but you find that protein X interacts with protein Z and not with protein Y...

 

 

(quoting hobglobin's post #10 above)

 








©1999-2013 Protocol Online, All rights reserved.