In-memory indexed caching for distributed data processing
Powerful abstractions such as dataframes are only as efficient as their underlying runtime system. Apache Spark, the de facto standard framework for distributed data processing, is poorly suited to modern cloud-based data-science workloads due to its outdated assumptions: static datasets analyzed using coarse-grained transformations. In this paper, we introduce the Indexed DataFrame, an in-memory cache that supports a dataframe abstraction with indexing capabilities for fast lookup and join operations. Moreover, it supports appends with multi-version concurrency control. We implement the Indexed DataFrame as a lightweight, standalone library that can be integrated with minimal effort into existing Spark programs. We analyze the performance of the Indexed DataFrame in cluster and cloud deployments with real-world datasets and benchmarks, using both Apache Spark and Databricks Runtime. In our evaluation, we show that the Indexed DataFrame significantly speeds up query execution compared to a non-indexed dataframe, while incurring modest memory overhead.
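To illustrate the core idea described in the abstract, the following is a minimal sketch (not the paper's implementation, which is a Scala library for Spark): an in-memory, append-only row store with a hash index over a key column, where appends produce new versions and readers at an older snapshot version do not see later rows. All class and method names here are hypothetical, chosen only for illustration.

```python
class IndexedDataFrameSketch:
    """Hypothetical sketch of an indexed in-memory cache with
    multi-version concurrency control for appends. Not the API of
    the Indexed DataFrame library itself."""

    def __init__(self, key_col):
        self.key_col = key_col
        self.rows = []    # append-only row store (list of dicts)
        self.index = {}   # hash index: key value -> row positions
        self.version = 0  # size of the last committed snapshot

    def append(self, batch):
        # Append a batch of rows and publish a new version; readers
        # holding an older version number are unaffected.
        for row in batch:
            self.index.setdefault(row[self.key_col], []).append(len(self.rows))
            self.rows.append(row)
        self.version = len(self.rows)
        return self.version

    def lookup(self, key, version=None):
        # Fast point lookup via the hash index; rows appended after
        # the requested snapshot version are filtered out.
        v = self.version if version is None else version
        return [self.rows[i] for i in self.index.get(key, []) if i < v]
```

The same per-key index that accelerates point lookups can also drive an indexed join: probing each key of one side against the other side's index instead of scanning or shuffling, which is the kind of operation the Indexed DataFrame targets.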
36th IEEE International Parallel and Distributed Processing Symposium, IPDPS 2022
Uta, A., Ghit, B., Dave, A., Rellermeyer, J., & Boncz, P. A. (2022). In-memory indexed caching for distributed data processing. In 2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS) (pp. 104–114). doi:10.1109/IPDPS53621.2022.00019