Big Data in R with Arrow
Data analysis pipelines involving larger-than-memory data are increasingly common. In this workshop you will be introduced to Apache Arrow, a multi-language toolbox for working with larger-than-memory tabular data, and learn to build seamless “big” data analysis pipelines with R.
This workshop focuses on the arrow R package—a mature R interface to Apache Arrow—for processing larger-than-memory files and multi-file datasets using familiar dplyr syntax. You’ll learn to create and use Parquet, an interoperable file format, for efficient data storage and access, with data stored both on disk and in the cloud, and to exercise fine control over data types to avoid common large-data pipeline problems. Designed for new-to-arrow R users, this workshop provides a foundation for using Arrow, giving you access to a powerful suite of tools for performant analysis of larger-than-memory tabular data in R.
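To give a flavour of the workflow described above, here is a minimal sketch of the arrow-plus-dplyr pipeline: a data frame is written out as a partitioned Parquet dataset, opened lazily with `open_dataset()`, and queried with ordinary dplyr verbs. The built-in `mtcars` data stands in for real larger-than-memory data; the workflow is identical at scale.

```r
library(arrow)
library(dplyr)

# Write a small data frame as a Parquet dataset, partitioned by cylinder
# count (mtcars is a stand-in for genuinely large data).
td <- file.path(tempdir(), "cars")
write_dataset(mtcars, td, partitioning = "cyl")

# open_dataset() scans the files lazily; nothing is loaded into RAM yet.
# The dplyr verbs are translated and executed by the Arrow engine, and
# collect() pulls only the (small) result back into R.
open_dataset(td) |>
  filter(cyl == 6) |>
  summarise(mean_mpg = mean(mpg)) |>
  collect()
```

Because computation happens inside Arrow and only the summarised result is collected, the same pipeline works unchanged on datasets far larger than available memory.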
This course is for you if you:
- Want to learn how to work with tabular data that is too large to fit in memory using existing R and tidyverse syntax implemented in Arrow
- Want to learn about Parquet, a powerful alternative to the CSV file format
- Want to learn how to engineer your tabular data storage for more performant access and analysis with Apache Arrow
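As a small illustration of the Parquet point above, here is a round trip of a data frame through a Parquet file using the arrow package. Unlike CSV, Parquet stores column types in the file itself, so values come back exactly as they were written with no re-parsing from text (the temporary file path is just for the example).

```r
library(arrow)

# Write a data frame to a Parquet file and read it back.
pq <- tempfile(fileext = ".parquet")
write_parquet(mtcars, pq)
df <- read_parquet(pq)

# Column types and values are preserved exactly: the numeric column
# survives the round trip byte-for-byte.
identical(df$mpg, mtcars$mpg)
```

With CSV, the same round trip would involve formatting numbers as text and re-guessing column types on read; Parquet avoids both, which is one reason it is faster and safer for large pipelines.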