Bottom Line:
We found that these sections of code could be restructured and parallelized to improve efficiency. The fact that the website has continued to work well in "real-world" tests and receives a considerable number of new citations provides the strongest testimony to the effectiveness of our improvements. However, we soon expect to need significantly more cluster nodes as dataset sizes continue to expand.

Background: The Isolation by Distance Web Service (IBDWS) is a user-friendly web interface for analyzing patterns of isolation by distance in population genetic data. IBDWS enables researchers to perform a variety of statistical tests, such as Mantel tests and reduced major axis (RMA) regression, and returns vector-based graphs. More than 60 citations since 2005 confirm the popularity and utility of the website. Despite its usefulness, analyses of data sets with over 65 populations can take hours or days to complete because of the computational intensity of the statistical tests. This is especially troublesome for web-based software, since users tend to expect real-time results on the order of seconds or, at most, minutes. Moreover, as genetic data continue to grow and diversify, so does the demand for processing power. To increase the speed and efficiency of IBDWS, we first determined which parts of the code were most time-consuming and whether they were amenable to parallelization or algorithmic optimization.
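For readers unfamiliar with RMA regression: the RMA slope is the ratio of the standard deviations of the two variables, signed by their correlation coefficient, with the intercept chosen so the line passes through the means. A minimal Python sketch (purely illustrative; this is not the IBDWS code, and the function name is our own):

```python
import math

def rma_slope_intercept(xs, ys):
    """Reduced major axis (RMA) regression.

    The slope is sd(y)/sd(x), carrying the sign of the correlation
    coefficient r; the intercept makes the line pass through the means.
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    r = cov / (sx * sy)
    slope = math.copysign(sy / sx, r)   # sign of slope follows sign of r
    intercept = my - slope * mx
    return slope, intercept
```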

Results: Runtime tests uncovered two areas of IBDWS that consumed significant amounts of time: the randomizations within the Mantel test and the RMA calculations. We found that these sections of code could be restructured and parallelized to improve efficiency. The code was first optimized by merging two similar randomization routines and implementing a Fisher-Yates shuffling algorithm; the routines were then parallelized. Tests of the parallelization and the Fisher-Yates algorithmic improvements were performed on data sets ranging from 10 to 150 populations. All tested algorithms showed runtime reductions and a very close fit to the speedups predicted from time-complexity calculations. In the case of 150 populations with 10,000 randomizations, data were analyzed 23 times faster.
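The Fisher-Yates shuffle permutes an array in place in O(n) time, drawing each permutation with equal probability, which makes it well suited to Mantel-style randomizations. A generic Python sketch of the algorithm (illustrative only; the IBDWS routines themselves are not reproduced here):

```python
import random

def fisher_yates_shuffle(items, rng=random):
    """In-place Fisher-Yates shuffle: O(n), every permutation equally likely."""
    for i in range(len(items) - 1, 0, -1):
        j = rng.randrange(i + 1)            # pick from the not-yet-fixed prefix
        items[i], items[j] = items[j], items[i]
    return items
```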

Conclusion: Since the implementation of the new algorithms in late 2007, datasets have continued to increase substantially in size, and many now exceed the largest population sizes in our test sets. The fact that the website has continued to work well in "real-world" tests and receives a considerable number of new citations provides the strongest testimony to the effectiveness of our improvements. However, we soon expect to need significantly more cluster nodes as dataset sizes continue to expand. The parallel implementation can be found at http://ibdws.sdsu.edu/.

Figure 7: Effect of parallelization by population size. Time complexity improvement of parallelization with test population sizes greater than 30 (10,000 randomizations).

Mentions:
One of the goals of timing the parallel and serial code was to determine at what point the analyses benefit from parallelization. Timing tests revealed that, given the CPU speed, data sets with more than 30 populations and more than 100 randomizations benefit from parallelization (data not shown). Once this cutoff was set, the final configuration was determined with seven PEs configured as previously described. We compared this setup to the serial program and found that the time savings become more dramatic with larger population sizes (Figures 7 and 8). From the user's perspective, the switch between one processor and multiple processors is completely transparent.
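The cutoff logic above can be sketched as follows. This hypothetical Python example (not the IBDWS implementation, which runs on a dedicated cluster with seven PEs) runs small jobs serially and splits larger randomization batches across seven workers; the Mantel statistic shown is the standard cross-product sum of one matrix against a row/column-permuted copy of the other. All names and the matrix layout are our own illustration:

```python
from multiprocessing import Pool
import random

N_WORKERS = 7  # mirrors the seven PEs described in the text

def mantel_statistic(order, dist_a, dist_b):
    """Cross-product Mantel statistic with rows/columns of dist_b permuted."""
    n = len(dist_a)
    return sum(dist_a[i][j] * dist_b[order[i]][order[j]]
               for i in range(n) for j in range(n))

def _worker(args):
    """Compute n_rand randomized statistics with an independent RNG seed."""
    dist_a, dist_b, n_rand, seed = args
    rng = random.Random(seed)
    n = len(dist_a)
    stats = []
    for _ in range(n_rand):
        order = list(range(n))
        rng.shuffle(order)  # CPython uses Fisher-Yates internally
        stats.append(mantel_statistic(order, dist_a, dist_b))
    return stats

def randomization_test(dist_a, dist_b, n_rand):
    """Serial below the cutoff (<=30 populations or <=100 randomizations),
    parallel across N_WORKERS processes above it."""
    if len(dist_a) <= 30 or n_rand <= 100:
        return _worker((dist_a, dist_b, n_rand, 0))
    per_worker = n_rand // N_WORKERS
    jobs = [(dist_a, dist_b, per_worker, seed) for seed in range(N_WORKERS)]
    with Pool(N_WORKERS) as pool:
        return [s for chunk in pool.map(_worker, jobs) for s in chunk]
```

Splitting the randomizations is embarrassingly parallel because each permuted statistic is independent, which is why the observed speedups track the time-complexity predictions so closely.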
