Session: Symposium--Bioinformatics for Crop Improvement: Assay Design and Applications
Tuesday, October 18, 2011: 1:05 PM
Henry Gonzalez Convention Center, Room 209
It might seem that more is better, especially when we are talking about data. However, careful attention to every step of data inspection, management, analysis, and interpretation is required when working with today's massive high-throughput experiments. The data generated by these experiments are still expensive, but the costs are dropping and their use is increasing across the biological spectrum. We will examine the steps and missteps of eQTL analysis of microarray data in an intercross. During the talk, we will look at how to uncover and correct raw data issues, map thousands of traits, identify and confirm hotspots, extract (hopefully) useful subsets or modules of highly correlated traits that co-map, and build causal networks among traits in such subsets. In addition to working with the data from one study, we will explore ways to incorporate previously gleaned biological information on pathways and function into eQTL analysis. These investigations eat up tremendous computing resources. Therefore, we will briefly examine how high-throughput computing platforms such as Condor can be used effectively to get the job done in days rather than months or years.
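To make the hotspot step concrete, here is a minimal sketch (not from the talk itself) of one common approach: after single-trait genome scans, count how many expression traits exceed a LOD threshold at each marker, and flag markers where unusually many traits co-map. The function name, thresholds, and simulated data are all hypothetical, and in a real analysis the count threshold would be calibrated by permutation rather than fixed.

    import numpy as np

    def find_hotspots(lod, lod_threshold=3.0, count_threshold=50):
        """Flag putative eQTL hotspots.

        lod: 2-D array of shape (n_traits, n_markers) holding the LOD
             score of each expression trait at each marker, taken from
             single-trait genome scans.
        Returns the marker indices where the number of co-mapping
        traits exceeds count_threshold, plus the per-marker counts.
        """
        # For each marker, count traits whose LOD clears the threshold.
        counts = (lod >= lod_threshold).sum(axis=0)
        # A hotspot is a marker to which unusually many traits co-map.
        return np.flatnonzero(counts >= count_threshold), counts

    # Toy usage: 1,000 simulated traits scanned at 200 markers.
    rng = np.random.default_rng(0)
    lod = rng.exponential(scale=0.5, size=(1000, 200))
    lod[:, 42] += 4.0  # plant an artificial hotspot at marker 42
    hotspots, counts = find_hotspots(lod)
    print("hotspot markers:", hotspots)

The scans themselves would come from QTL mapping software; only the counting step is sketched here.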
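On the computing side, the trait-level parallelism maps naturally onto Condor's submit language: one job per batch of traits, fanned out across the pool. Below is a sketch of a submit description file, assuming a hypothetical run_scan.sh wrapper that scans the batch named by its process number.

    # Hypothetical HTCondor submit description: fan 100 batches of
    # single-trait genome scans out across the pool, one job per batch.
    universe   = vanilla
    executable = run_scan.sh
    arguments  = --batch $(Process)
    output     = logs/scan_$(Process).out
    error      = logs/scan_$(Process).err
    log        = logs/scan.log
    queue 100

One workable arrangement is for each job to write its batch of LOD scores to shared storage, after which stitching the batches back into the traits-by-markers matrix is a cheap post-processing step.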
Division: ASA Section: Biometry and Statistical Computing