The specific aim of this resource request is to examine the scalability and robustness of our code on BG/P. We have confirmed that, during the flow-solve phase, our CFD flow solver exhibits perfect strong scaling to the full 32k cores of our local machine (CCNI BG/L at RPI), but this will be our first access to BG/P. We are also eager to study the performance of the adaptive phase of our code. Some aspects have scaled well on BG/L (e.g., refinement has taken a 17-million-element mesh and, through local adaptivity on 16k cores matching a requested size field, produced a mesh exceeding 1 billion elements), but other aspects (e.g., coarsening) have proved more challenging. Finally, we also have activity in emulation and fault tolerance (more on this below). Therefore, while most of the requested resources will be used to extend the scalability of the flow solver, some portion will be devoted to these other aspects. While preliminary scaling numbers would be possible with a smaller allocation, we have observed that at least a few runs at large core counts for significant durations are required to understand robustness and fault tolerance. As we work regularly with BG/L, we do not expect issues with software requirements. As for the sizes of planned calculations, we would like to progressively raise the processor count in a scaling study, either to the point where scaling is lost or to the largest processor count available to us; a brief sketch of the bookkeeping behind such a study follows the summary below. The various aspects described above involve about five project members. If further detail is needed, we can easily provide the full NSF PetaApps proposal and/or reports from our DOE CET activity, ITAPS (of which Lori Diachin is the overall PI and Mark Shephard and Kenneth Jansen are the RPI PIs).

PetaApps Summary: This proposed research will develop an adaptive computational fluid dynamics solver that achieves sustained petaflop performance. A mature finite element method will be paired with anisotropic adaptive meshing procedures to provide a powerful tool for attacking fluid flow problems in which boundary and shear layers develop highly anisotropic solutions that can only be located and resolved through adaptivity. These flow problems can involve complicated geometries and complex physics, such as fluid turbulence and multiphase interactions, resulting in discretizations so large that only petascale simulation offers the resources required for a complete solution. To achieve this vision, we propose a research plan with four tightly linked thrusts: 1) extension of the current parallel solver to sustained petaflop performance by addressing all scaling issues; 2) extension of the current parallel mesh adaptation procedures to run efficiently on petascale computers; 3) development of a petascale emulator that predicts the scaling issues of 1) and 2), enabling them to be addressed before the petascale platform becomes available; and 4) petascale demonstration simulations, including i) simulation of trailing-edge noise (compressible, explicit), ii) two-phase annular flow, and iii) cardiovascular simulation (both incompressible, implicit), each yielding fundamental scientific insight into currently poorly understood flow physics.
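For concreteness, the sketch below shows the bookkeeping the proposed strong-scaling study reduces to: given wall-clock times at increasing core counts, report speedup and parallel efficiency relative to the smallest run, with "scaling is lost" corresponding to efficiency falling well below 1. The core counts and timings shown are hypothetical, purely for illustration.

/* Illustrative strong-scaling bookkeeping; the timing data are made up. */
#include <stdio.h>

int main(void)
{
    const int    cores[]  = { 4096, 8192, 16384, 32768 };
    const double time_s[] = { 800.0, 400.0, 201.0, 103.0 };  /* hypothetical */
    const int n = sizeof cores / sizeof cores[0];

    for (int i = 0; i < n; ++i) {
        double speedup    = time_s[0] / time_s[i];           /* measured gain  */
        double ideal      = (double)cores[i] / cores[0];     /* perfect gain   */
        double efficiency = speedup / ideal;                 /* 1.0 is perfect */
        printf("%6d cores: speedup %6.2f (ideal %6.2f), efficiency %.2f\n",
               cores[i], speedup, ideal, efficiency);
    }
    return 0;
}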
Starting from a fully scaling code on a local ``stepping stone'' machine (32k processors), we propose to step through each more powerful ``stepping stone'' machine (NSF Track 2 and DOE), with each of the four thrusts identifying barriers and contributing to solutions, in order to reach a sustained petaflop on the NSF Track 1 machine and thereby solve problems of fundamental scientific interest.

Intellectual Merit: Algorithms to extend an adaptive CFD solver to petaflop performance will be developed to maintain scaling in response to emerging architectures (multicore, Cell, clock rate, bus, cache) and networks (bandwidth, latency, communication layout/mapping, asynchronous communication). This solver and others will be enabled by developments in parallel anisotropic adaptive meshing that afford a dramatic reduction in the number of elements required to accurately represent anisotropic physics. Both developments will be guided by a petascale emulator that does not perturb the heap or stack memory, fits within an MPI layer, and predicts performance accurately in the presence of system faults (one way such an emulator can sit within the MPI layer is sketched below). The proposed applications will lead to a better understanding of the complex physics of turbulent flow, the noise produced by turbulent flows, multiphase flows, and the interaction of mechanics and biology in the human cardiovascular system. This fundamental scientific insight will be gained from high-fidelity simulations that will be the first of their kind, made possible by petascale resources and algorithms.

Broader Impacts: A petascale adaptive CFD solver will demonstrate that robust, open-source software, applicable to a very broad range of flow physics, can sustain petaflop performance, providing a path for all continuum-based PDE solvers to follow and greatly extending our nation's modeling and simulation capacity. Additionally, the petascale emulator will be extended to other classes of parallel codes. The proposed petascale applications will impact critical areas of national need, including energy, the environment, and improved patient care.
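As an illustration of how an emulator can fit within the MPI layer without perturbing application heap or stack memory, the minimal sketch below uses the standard PMPI profiling interface, which lets a library intercept MPI calls and forward them to the real implementation. The interception mechanism is standard MPI; the delay model (a fixed per-message latency) is a hypothetical stand-in for the emulator's actual performance model.

/* Minimal sketch of MPI-layer interposition via the standard PMPI
 * profiling interface.  The constant below is an assumed, illustrative
 * value, not a measured or proposed figure. */
#include <mpi.h>

static const double EMULATED_LATENCY = 5.0e-6;  /* hypothetical 5 us/message */

/* Intercept MPI_Send; the real operation is forwarded to PMPI_Send. */
int MPI_Send(const void *buf, int count, MPI_Datatype type,
             int dest, int tag, MPI_Comm comm)
{
    double t0 = MPI_Wtime();
    int rc = PMPI_Send(buf, count, type, dest, tag, comm);
    /* Busy-wait to emulate a slower network.  No application memory is
     * read or written, so the heap and stack are left unperturbed. */
    while (MPI_Wtime() - t0 < EMULATED_LATENCY)
        ;
    return rc;
}

Because the interposed library is linked ahead of the MPI library, the application runs unmodified; richer models (bandwidth terms, injected faults) would follow the same pattern for other MPI entry points.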