Memory size has long limited large-scale applications on high-performance computing (HPC) systems. Since compute nodes frequently do not have swap space, physical memory often limits problem sizes. Increasing core counts per chip and power density constraints, which limit the number of DIMMs per node, have exacerbated this problem. Further, DRAM constitutes a significant portion of overall HPC system cost. Therefore, instead of adding more DRAM to the nodes, mechanisms to manage memory usage more efficiently, preferably transparently, could increase effective DRAM capacity and thus the benefit of multicore nodes for HPC systems. MPI application processes often exhibit significant data similarity. These data regions occupy multiple physical locations across the individual rank processes within a multicore node and thus offer potential savings in memory capacity. These regions, primarily residing in the heap, are dynamic, which makes them difficult to manage statically. Our novel memory allocation library, SBLLmalloc, automatically identifies identical memory blocks and merges them into a single copy. Our implementation is transparent to the application and requires no kernel modifications. Overall, we demonstrate that SBLLmalloc reduces the memory footprint of a range of MPI applications by 32.03% on average and up to 60.87%. Further, SBLLmalloc supports problem sizes for IRS over 21.36% larger than standard memory management techniques allow, significantly increasing effective system size. Similarly, SBLLmalloc requires 43.75% fewer nodes than standard memory management techniques to solve an AMG problem.
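
To make the merging mechanism concrete, the C sketch below illustrates one way content-based page merging can be done entirely in user space, using a node-wide shared-memory segment and kernel copy-on-write. This is a minimal sketch under assumptions of our own: page_hash, merge_page, the 4 KB page granularity, and the shared-segment layout are hypothetical illustrations, not SBLLmalloc's actual API or implementation.

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <sys/types.h>

    #define PAGE_SIZE 4096

    /* FNV-1a hash of one page; used to find candidate duplicate pages
       cheaply before a full byte-by-byte comparison confirms a match.
       (Hypothetical helper, not part of SBLLmalloc.) */
    static uint64_t page_hash(const void *page)
    {
        const unsigned char *p = (const unsigned char *)page;
        uint64_t h = 1469598103934665603ULL;
        for (size_t i = 0; i < PAGE_SIZE; i++) {
            h ^= p[i];
            h *= 1099511628211ULL;
        }
        return h;
    }

    /* Remap `page` as a private, copy-on-write view of an identical
       canonical page stored at offset `canonical_off` in a shared
       memory segment (`shm_fd`) visible to all ranks on the node.
       Reads are then served from the single shared physical copy; the
       first write triggers a kernel copy-on-write fault that silently
       gives the process its own copy again, so application semantics
       are preserved without kernel modifications. */
    static int merge_page(void *page, int shm_fd, off_t canonical_off)
    {
        void *p = mmap(page, PAGE_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_FIXED, shm_fd, canonical_off);
        return (p == MAP_FAILED) ? -1 : 0;
    }

An allocator built along these lines would hash each candidate page, compare bytes on a hash hit, and call merge_page only on a confirmed match, so a false hash collision can never alter application data.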
