Sonication vs. MNase
Posted 25 August 2011 - 03:25 PM
You ask two very good questions. My impression from the buffer in the Cell Signaling kit is that it is a pretty standard lysis buffer with a detergent, most likely SDS (judging by the consistency and appearance of the buffer). Beyond that, of course, I don't know the exact details of the solution. However, note that this protocol uses a second nuclear lysis/wash. The initial cell lysis buffer is spun out, the nuclei are lysed and washed in buffer B (+ DTT), and the enzyme digestion is then performed in this same buffer B. So the composition of buffer B is the most important for successful digestion with MNase; I would guess that buffer B is a standard MNase reaction buffer.
Now to your second point. I actually agree with you 100% that it might be beneficial in some cases to allow the digestion to go all the way to single nucleosomes, at least theoretically, in ChIP-Seq. But you really answered the question yourself: the issue is that it becomes impossible to validate your library prior to running the sample on the machine. It also becomes more complicated when you want to compare ChIP signals to ChIP-Seq signals, because for standard ChIP there is no way you should digest all the way. I know from experience that even slightly over-digested chromatin will give dramatically different results to a well-prepared sample. So you would have to have two different protocols and two different chromatin sizes for ChIP and ChIP-Seq, and I'm not sure that is ideal.
Hope this helps a little.
Posted 26 August 2011 - 01:09 AM
my major concern regarding MNase-ChIP is the chromatin release. are you sure you get a representative fraction of chromatin released from the nucleus? did you sequence your input chromatin, how evenly do the reads distribute?
according to my expts a huge part of chromatin remains in the nuclei after digestion with the released one being mainly chopped down to mono/di-nucleosomes.
i would be really interested in trying the MNase approach but without a proper release this technique is useless.
The answer to your question is YES, I did sequence the input chromatin, and the reads distribute pretty nicely. In fact, the results looked a whole lot better than with sonication, that is for certain. When you compare my data to other published studies, my results are VERY consistent. The paper is in press now, so obviously the reviewers didn't have a problem with my technique.
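For anyone who wants to put a number on "the reads distribute pretty nicely", here is a minimal sketch (pure Python, with simulated read positions standing in for mapped input reads — the numbers are illustrative, not from this experiment) that bins read starts and compares the coefficient of variation of bin counts between a uniform and a clustered sample. A lower CV means more even coverage:

```python
# Sketch: quantify evenness of input reads by binning read start
# positions and computing the coefficient of variation (CV) of bin
# counts. Positions are simulated here; in practice they would come
# from a BAM/BED file.
import random
import statistics

def coverage_cv(positions, genome_length, bin_size=1000):
    """Bin read start positions and return stdev/mean of bin counts."""
    n_bins = genome_length // bin_size
    counts = [0] * n_bins
    for p in positions:
        counts[min(p // bin_size, n_bins - 1)] += 1
    mean = statistics.mean(counts)
    return statistics.pstdev(counts) / mean

random.seed(0)
genome = 1_000_000
even = [random.randrange(genome) for _ in range(50_000)]    # uniform input
biased = even[:25_000] + [random.randrange(genome // 10)
                          for _ in range(25_000)]           # half clustered

print(round(coverage_cv(even, genome), 2))    # near-uniform -> low CV
print(round(coverage_cv(biased, genome), 2))  # clustered reads -> higher CV
```

This only checks global evenness; comparing active vs. inactive regions, as suggested below, needs an annotation on top of it.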
As I stated in my original post, the reason that MNase has gotten less attention is because of people like you following the crowd. If you digest down to mono-nucleosomes, then that is what you will get. However, if you do the proper experiments beforehand and ensure that this doesn't happen, I personally believe that you get a good chromatin prep - or "proper release" as you call it.
How do you know you get "proper release" when you blast the crap out of your DNA-protein complexes with a sonicator? How do you know you don't destroy epitopes with sonication?
Easy, easy! It's great that you defend your technique with such enthusiasm, but watch out before getting too personal. You don't know me and you don't know my experience. I did a lot of MNase release experiments in higher eukaryotes and I observed - as have many of my colleagues - that active chromatin releases much earlier than inactive. The same goes for sonication, just a bit less biased.
how do you estimate even read distribution, did you compare active genes versus inactive genes? what is your average DNA fragment size before Seq library preparation, what is your avg fragment size after sequencing?
Posted 01 September 2011 - 11:56 AM
Anyway, on to your questions. No, I have not looked at active genes vs. inactive genes, but it would be a nice experiment. I am not sure how you would know 100% which genes are "active". I guess you could use a co-IP with a polymerase or something, and I have seen that published, but I'm not sure that is ideal. The average size before library prep for me was around 350 - 450 bp, but I still size selected in the normal range (around 250 bp) for sequencing. Of course, this leads to some bias, but that is an issue inherent in size selecting the library for sequencing, regardless of whether you use sonication or MNase. It is also a trade-off with the qRT-PCR analysis. As was mentioned earlier, I don't think it would be unreasonable to digest down to single nucleosomes for sequencing only, but this would make library validation etc. very difficult.
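To make the size-selection bias concrete, here is a toy sketch (the fragment-length distributions are made-up assumptions, not measured data) showing how a ~200-300 bp selection window applied to a mixed mono/di/tri-nucleosome pool ends up dominated by mononucleosome-derived fragments:

```python
# Toy illustration of size-selection bias: simulate a mixed pool of
# mono-, di-, and tri-nucleosome fragments (invented size means/spreads),
# apply a 200-300 bp selection window, and see which class survives.
import random

random.seed(2)
pool = ([("mono", random.gauss(180, 30)) for _ in range(1000)] +
        [("di",   random.gauss(380, 40)) for _ in range(1000)] +
        [("tri",  random.gauss(580, 50)) for _ in range(1000)])

selected = [label for label, size in pool if 200 <= size <= 300]
mono_frac = selected.count("mono") / len(selected)
print(len(selected), round(mono_frac, 2))  # selection is mostly mononucleosomes
```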
Average fragment size after sequencing? My read length was 40 bp if that is what you mean.
Just as a side note, it is important to point out that I have ZERO issues with sonication and it is a perfectly good technique. My goal is to simply alert people to the alternative.
Posted 04 September 2011 - 03:08 AM
so let's get back to science:
- active genes could also be defined using other people's data, in case you work in a well-established model
- you say you have 350-400 bp before size selection, does that mean you have mainly di/trinucleosomes? wow. i would like to be able to do that as well! what are your tuning parameters for digestion?
- still you do a size selection, that's imho problematic as you favor easily released mononucleosomes. why not apply another MNase step after IP to chop them all down to mononucleosomes?
- fragment size after sequencing is the average fragment size computationally determined using forward/reverse strand mapping correlation or similar approaches. this size is usually much smaller than the 250 bp you selected in the first place (solexa sequencers prefer shorter frags), which is again problematic: you generate dinucleosomes first, then select DNA smaller than dinucleosomes, and the sequencer preferentially sequences the shorter fragments of those. of course this also applies to sonication, kind of. using covaris shearing you however get very homogeneous 150-200 bp chromatin, so at least you are only left with one biased step.
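For readers unfamiliar with the strand cross-correlation approach mentioned here, it can be sketched roughly as follows (simulated reads and a naive brute-force scan; real tools built on this idea operate on BAM files and are far more efficient). Forward reads mark fragment 5' ends, reverse reads mark fragment 3' ends, so the shift that best aligns the two strand profiles estimates the fragment length:

```python
# Sketch of strand cross-correlation fragment-size estimation.
# Simulated fragments of known length; the scan should recover ~frag_len.
import random

def estimate_fragment_size(fwd, rev, genome_length, max_shift=300):
    """Shift reverse-strand 5' ends left by d; the d maximizing overlap
    with forward-strand 5' ends estimates the fragment length."""
    fwd_cov = [0] * genome_length
    rev_cov = [0] * genome_length
    for p in fwd:
        fwd_cov[p] += 1
    for p in rev:
        rev_cov[p] += 1
    best_d, best_score = 0, -1.0
    for d in range(max_shift):
        score = sum(f * rev_cov[i + d]
                    for i, f in enumerate(fwd_cov[:genome_length - d]) if f)
        if score > best_score:
            best_d, best_score = d, score
    return best_d

random.seed(1)
genome, frag_len = 20_000, 150
starts = [random.randrange(genome - frag_len) for _ in range(3_000)]
fwd = starts                              # 5' ends of forward reads
rev = [s + frag_len - 1 for s in starts]  # 5' ends of reverse reads
shift = estimate_fragment_size(fwd, rev, genome)
print(shift)  # close to frag_len
```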
I am not happy with sonication, not at all (time-consuming, not robust, biased), and I really would like to have a robust alternative.
Edited by nanook, 04 September 2011 - 03:10 AM.
Posted 04 September 2011 - 07:16 AM
1) Active genes could also be defined using other people's data in case you work in a well-established model
This is something that a few of us in my area are looking to do on a collaborative basis.
2) You say you have 350-400 bp before size selection, does that mean you have mainly di/trinucleosomes? wow. i would like to be able to do that as well! what are your tuning parameters for digestion?
I have a roughly even distribution of mono/di/tri nucleosomes, from 150 bp to around 900 bp. The average is somewhere around 400 bp. I am not in the lab right now, but will post a bioanalyzer pic later. It is not easy to get this pattern and it takes a lot of time. You get the sense that the chromatin wants to just collapse in the presence of MNase, and you really have to optimize the conditions. The most important factor is the ratio of enzyme concentration to the number of cells used to prepare the chromatin. If you cannot consistently get an accurate cell count prior to MNase digestion, you will struggle. I know this is stating the obvious, but with MNase, if you are off by 0.2 ul, you will digest the chromatin to mono's exclusively. The type of cells also heavily determines the concentration of MNase required.
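To illustrate why a 0.2 ul pipetting error can matter so much, here is a back-of-the-envelope helper. All unit values and concentrations are hypothetical placeholders, not a validated protocol; the point is only that at typical working volumes a fixed 0.2 ul error is a large fraction of the total enzyme dose:

```python
# Illustrative titration arithmetic: scale MNase volume to cell count,
# then express a fixed 0.2 ul pipetting error as a fraction of the dose.
# units_per_million and stock_units_per_ul are invented example values.

def mnase_volume_ul(cells, units_per_million=0.5, stock_units_per_ul=10.0):
    """Volume of MNase stock needed for a target units-per-cell ratio."""
    target_units = (cells / 1e6) * units_per_million
    return target_units / stock_units_per_ul

vol = mnase_volume_ul(4e6)       # dose for 4 million cells, in ul
error_pct = 0.2 / vol * 100      # a 0.2 ul error as a percent of the dose
print(round(vol, 2), round(error_pct, 1))
```

With these example numbers the planned volume is itself only a fraction of a microliter, so a 0.2 ul slip is a large relative overdose, which is consistent with the observation that it pushes the digest all the way to mononucleosomes.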
3) Still you do a size selection, that's imho problematic as you favor easily released mononucleosomes. why not apply another MNase step after IP to chop them all down to mononucleosomes?
I agree, any time you do a size selection there is a bias. As was discussed earlier in this thread, I think digesting all the way to mono's is a nice idea purely for sequencing, and your suggestion to do a second digest after ChIP is not a bad one at all. The only problem was that when I did digest to mono's only, there was a huge loss in DNA, probably because of purification issues, so that would need to be overcome. But, nevertheless, a good suggestion.
4) ChIP peak size following my analysis was around 100 - 250 bp on average, if that answers your question. The peaks were much tighter compared to sonication. I accept your point about bias here too.
What is your current method/problems?
Posted 09 September 2011 - 04:39 AM
sonication has always been a hassle, in particular when trying to generate small fragments as needed for sequencing. it takes a lot of cycles to chop chromatin down to 200 bp, and the sonicator's drop in performance after some 100 hours of operation is very annoying. in addition, parallel processing is limited, and you never know whether, at a certain energy input, proteins also start to break and epitopes get lost.