Monday, 6 October 2008: 2:45 PM
George R. Brown Convention Center, 362AB
A three-dimensional flow and transport model was developed to simulate the results of a laboratory-scale experiment in which snapshots of concentration were obtained using magnetic resonance imaging (MRI) during the displacement of tracer through a 14 by 8 by 8 cm flow cell. The medium was deliberately constructed to be heterogeneous with a known spatial correlation structure using sand of five different grain-size distributions. The extremely well characterized flow cell and large, high-precision data set of concentrations during displacement make this a unique experiment for examining the validity of flow and transport models, and for exploring new methods for interpreting large data sets using advanced optimization algorithms. A transport model was constructed by solving the steady-state flow equations with the Finite Element Heat and Mass (FEHM) code and using FEHM's particle-tracking transport model to simulate tracer migration. The particle-tracking model was selected so that precise estimates of the transport parameters could be obtained that are not corrupted by numerical dispersion; a large number of particles (typically one million) was required to provide accuracy. The inverse model included nine uncertain parameters: the five permeability values of the individual sand units and four dispersion/diffusion parameters. The inverse problem was solved with AMALGAM, a recently developed self-adaptive multimethod optimization algorithm. The computations were enabled by running both the transport model and the optimization loop on a high-performance cluster using 100 different computational nodes. Our results show that this inverse modeling approach yields parameter estimates and increased understanding of the behavior of the system, and achieves significant improvements in the fit to the data over hand calibration.
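The inverse-modeling workflow described above can be sketched in miniature: a forward model maps parameters to simulated concentrations, a misfit function compares those to the observed data, and a population-based optimizer searches the parameter space. The sketch below is a toy stand-in, not the study's actual setup: a 1-D analytic advection-dispersion solution replaces the FEHM particle-tracking model, a simple elitist evolutionary loop replaces AMALGAM, and only two parameters (a velocity and a dispersivity, both hypothetical) stand in for the nine-parameter problem.

```python
import math
import random

def forward_model(velocity, dispersivity, x_obs, t=1.0):
    """Toy 1-D advection-dispersion breakthrough profile (continuous injection).
    Stand-in for one FEHM particle-tracking run; not the study's actual model."""
    conc = []
    for x in x_obs:
        D = dispersivity * velocity                      # dispersion coefficient
        arg = (x - velocity * t) / (2.0 * math.sqrt(D * t))
        conc.append(0.5 * math.erfc(arg))
    return conc

def misfit(params, x_obs, data):
    """Sum-of-squares mismatch between simulated and 'observed' concentrations."""
    v, a = params
    if v <= 0.0 or a <= 0.0:
        return float("inf")
    sim = forward_model(v, a, x_obs)
    return sum((s - d) ** 2 for s, d in zip(sim, data))

def evolve(objective, bounds, pop_size=20, generations=60, seed=0):
    """Minimal elitist evolutionary search, standing in for AMALGAM."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(generations):
        # mutate every member, then keep the best pop_size of parents + children
        children = [[min(max(g + rng.gauss(0, 0.05 * (hi - lo)), lo), hi)
                     for g, (lo, hi) in zip(p, bounds)]
                    for p in pop]
        pop = sorted(pop + children, key=objective)[:pop_size]
        if objective(pop[0]) < objective(best):
            best = pop[0]
    return best

# Synthetic "MRI" observations from known true parameters (14 cm cell, 1 cm spacing)
x_obs = [i * 0.01 for i in range(1, 15)]
true_v, true_a = 0.05, 0.002
data = forward_model(true_v, true_a, x_obs)

bounds = [(0.01, 0.2), (0.0005, 0.01)]
best = evolve(lambda p: misfit(p, x_obs, data), bounds)
```

In the actual study each objective evaluation is a full three-dimensional particle-tracking simulation, so the structure is the same but each call is vastly more expensive, which is what motivates the cluster-based evaluation discussed next.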
The study also illustrates that numerical methods that make effective use of high-performance computing resources and advanced optimization algorithms are crucial in enabling the interpretation of very large data sets using large-scale, distributed-parameter models.
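The parallelization pattern implied here, where each optimizer generation farms its population of candidate parameter sets out to many compute nodes, can be sketched with Python's standard `concurrent.futures`. This is an illustrative assumption about the workflow's structure, not the study's actual job-dispatch code (which ran on a 100-node cluster); `expensive_forward_run` is a hypothetical placeholder for one forward simulation.

```python
from concurrent.futures import ThreadPoolExecutor

def expensive_forward_run(params):
    """Placeholder for one forward transport simulation
    (in the study, a particle-tracking run with ~one million particles)."""
    v = params[0]
    return sum((v * i) ** 2 for i in range(1000))

# One optimizer generation's candidate parameter sets (hypothetical values)
candidate_params = [[0.01 * k] for k in range(1, 9)]

# Evaluate the whole population concurrently, analogous to distributing
# forward runs across cluster nodes; map preserves input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    objectives = list(pool.map(expensive_forward_run, candidate_params))
```

On a real cluster the workers would be separate processes or MPI ranks rather than threads, but the embarrassingly parallel structure, one independent forward run per candidate, is the same.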