We would like to let our readers know that, on Friday evening, we called for a worldwide ban on NGS experiments through Twitter.
The responses have been overwhelming (see below), and the only serious question we received was how to implement such a ban. As you know, we are never short of ideas, and here are three possibilities we propose.
Option 1. NGS Ban Task Force:
We can set up a task force like the IAEA, which will travel around the world and make surprise inspections of labs to make sure they are not conducting any more NGS experiments. Violators will receive severe sanctions from our peace-loving President.
Option 2. Occupy NGS Movement:
We will set up tents inside the labs of anyone doing NGS experiments, and read math-heavy bioinformatics papers to the principal investigators of those labs.
Option 3. Do nothing and let nature take its course:
In a set of slides, CTB presented the scale of the data problem in biology and explained that there are two types of scientists: (i) those whose data are growing faster than their ability to analyze them, and (ii) those who have designed effective tools to handle exponentially rising data.
The second group possibly has fewer than fifteen members in the world, and is (numerically) insignificant for our discussion. The first group has many more members contributing to the data deluge. However, they do not know that they have already lost the battle, and that acquiring more data is akin to digging themselves into a bigger hole.
We believe this third option ('do nothing') will be the most effective way of imposing a worldwide ban on NGS!!
Jokes aside, our blog has been aware of the exponential data problem from day one, and has always actively looked for and presented scalable algorithms, often long before they were published. A few examples:
However, the above algorithms only address short-read nucleotide analysis, which is just one aspect of the data deluge. The availability of more data needs to be incorporated into every higher level of analysis as well. For example, if a comparative analysis was done with five insect genomes in 2008, that analysis may reveal new information with 50 genomes in 2012 and 500 genomes in 2015. An algorithm adequate for a five-insect analysis may not be scalable enough for a 500-insect analysis, and to make matters worse, non-technical issues (such as download speed, and the availability of sequences from 200 different sequencing labs around the globe in 200 different formats) are all going to contribute to the problem.
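To get a feel for why the five-genome tools break down, here is a back-of-the-envelope calculation (illustrative only, and assuming an all-vs-all pairwise comparison design, which is not specified above): the number of genome pairs grows quadratically with the number of genomes.

```python
from math import comb

# In an all-vs-all comparative analysis, the number of pairwise
# comparisons is C(n, 2) = n * (n - 1) / 2, i.e. quadratic in n.
for n in (5, 50, 500):
    print(f"{n} genomes -> {comb(n, 2)} pairwise comparisons")
# 5 genomes   ->     10 comparisons
# 50 genomes  ->   1225 comparisons
# 500 genomes -> 124750 comparisons
```

Even if the per-pair step stays cheap, going from 5 to 500 genomes multiplies the work more than twelve-thousand-fold, before any of the non-technical headaches kick in.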
We look forward to hearing from our readers what they see as potential solutions to the various problems posed by the exponentially increasing number of assembled genomes available worldwide. Please feel free to suggest useful scalable algorithms not covered by us previously.