Panthera: holistic memory management for big data processing over hybrid memories
Date: 2019
Authors: Wang, Chenxi; Cui, Huimin; Cao, Ting; Zigman, John; Volos, Haris; Mutlu, Onur; Lv, Fang; Feng, Xiaobing; Xu, Guoqing Harry
ISBN: 978-1-4503-6712-7
Publisher: Association for Computing Machinery
Place of publication: Phoenix, AZ, USA
Source: Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation
Abstract
Modern data-parallel systems such as Spark rely increasingly on in-memory computing, which can significantly improve the efficiency of iterative algorithms. To process real-world datasets, these systems often require extremely large amounts of memory, which are both costly and energy-inefficient. Emerging non-volatile memory (NVM) technologies offer high capacity compared to DRAM and low energy compared to SSDs. Hence, NVMs have the potential to fundamentally change the dichotomy between DRAM and durable storage in Big Data processing. However, most Big Data applications are written in managed languages (e.g., Scala and Java) and executed on top of a managed runtime (e.g., the Java Virtual Machine) that already performs various dimensions of memory management. Supporting hybrid physical memories adds a new dimension, creating unique challenges in data replacement and migration. This paper proposes Panthera, a semantics-aware, fully automated memory management technique for Big Data processing over hybrid memories. Panthera analyzes user programs on a Big Data system to infer their coarse-grained access patterns, which are then passed down to the Panthera runtime for efficient data placement and migration. For Big Data applications, this coarse-grained data division is accurate enough to guide the GC in laying out data, incurring little data-monitoring and data-movement overhead. We have implemented Panthera in OpenJDK and Apache Spark. An extensive evaluation with various datasets and applications demonstrates that Panthera reduces energy consumption by 32–52% with only a 1–9% execution-time overhead.
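The abstract's core idea is that coarse-grained access patterns (e.g., whether a dataset is cached and iterated over, or streamed once) are enough to decide where data should live. A minimal sketch of that placement policy, not Panthera's actual API or analysis (the field names and thresholds below are assumptions for illustration only):

```python
# Hypothetical illustration of coarse-grained tier placement, inspired by the
# abstract: datasets that are cached or repeatedly iterated stay in fast DRAM,
# while large single-pass data can live in high-capacity, low-energy NVM.
# The 'cached'/'accesses' fields are assumptions of this sketch, not Panthera's.

def place(dataset):
    """Return a memory tier ("DRAM" or "NVM") for a dataset description.

    dataset: dict with
      'cached'   -- bool, whether the program persists it across iterations
      'accesses' -- int, how many passes the program makes over it
    """
    if dataset["cached"] or dataset["accesses"] > 1:
        return "DRAM"   # hot, iterated data: keep in fast memory
    return "NVM"        # cold, single-pass data: high-capacity NVM

# Example: an iterative ML training set vs. a raw input scanned once.
rdds = {
    "training_set": {"cached": True, "accesses": 20},
    "raw_input":    {"cached": False, "accesses": 1},
}
placement = {name: place(d) for name, d in rdds.items()}
print(placement)  # {'training_set': 'DRAM', 'raw_input': 'NVM'}
```

In the paper's setting this decision is made automatically by static analysis of the user program and enforced by the garbage collector at region granularity, rather than by per-object monitoring, which is why the abstract reports such low runtime overhead.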