Multiple simulation runs (sometimes several thousand) are required to compute sound statistics for global sensitivity analysis. Current practice consists of running all the necessary instances with different sets of input parameters, storing the results (often called ensemble data) to disk, and later reading them back from disk to compute the required statistics. The amount of storage needed can quickly become overwhelming, and the associated long read times make the statistical computation time-consuming. To avoid this pitfall, scientists reduce the size of their studies by running low-resolution simulations or by down-sampling the output data in space and time. Today's petascale and tomorrow's exascale machines offer compute capabilities that would enable large-scale sensitivity studies, but such studies are unfortunately not feasible due to this storage issue. In this talk we will explore this problem and discuss novel approaches that may be used in the future.
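As a purely illustrative sketch of one way the store-then-read-back step could be avoided (not necessarily the approach presented in the talk), the Python snippet below folds each simulation result into running statistics as soon as it is produced, using Welford's one-pass algorithm, so the full ensemble never has to be written to disk. The function and class names are hypothetical.

```python
# Illustrative sketch only: one-pass ensemble statistics, so no ensemble
# data needs to be stored on disk and read back. Uses Welford's online
# algorithm for mean and variance.
import numpy as np

class RunningStats:
    """Accumulates per-grid-point mean and variance over ensemble members."""
    def __init__(self, field_shape):
        self.n = 0
        self.mean = np.zeros(field_shape)
        self.m2 = np.zeros(field_shape)   # sum of squared deviations

    def update(self, field):
        """Fold one simulation result (a full field) into the statistics."""
        self.n += 1
        delta = field - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (field - self.mean)

    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else np.zeros_like(self.m2)

def run_simulation(params, field_shape=(64, 64)):
    # Stand-in for a real solver: returns a synthetic field depending on params.
    rng = np.random.default_rng(abs(hash(tuple(params))) % 2**32)
    return rng.normal(loc=params[0], scale=1.0, size=field_shape)

# Hypothetical usage: each ensemble member contributes its result as it
# finishes, instead of being written out as ensemble data.
stats = RunningStats(field_shape=(64, 64))
for params in [(0.5, 1.0), (0.7, 1.2), (0.9, 0.8)]:   # toy parameter sets
    stats.update(run_simulation(params))
print(stats.mean.mean(), stats.variance().mean())
```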
BIO
Since 2009, Alejandro Ribes has held a permanent position in the Research & Development Department of EDF, a major European electric power company. He is responsible for an activity involving the visualization of large, complex industrial data, typically produced by numerical simulations of physical processes. He also collaborates with Université Pierre et Marie-Curie (Paris), where he lectures on opto-electronics.