#hdfs #hadoop #store #hdfs3

datafusion-objectstore-hdfs

An HDFS object store implementing the object_store API

5 releases

0.1.4 Jul 20, 2023
0.1.3 Apr 10, 2023
0.1.2 Oct 26, 2022
0.1.1 Oct 24, 2022
0.1.0 Sep 21, 2022

#597 in Filesystem


5,585 downloads per month
Used in 2 crates (via ballista-core)

Apache-2.0

31KB
521 lines

datafusion-objectstore-hdfs

HDFS as a remote ObjectStore for DataFusion.

Querying files on HDFS with DataFusion

This crate introduces HadoopFileSystem as a remote ObjectStore, which provides the ability to query files stored on HDFS.
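
Because HadoopFileSystem implements the ObjectStore trait from the object_store crate, it can also be used with any code written against that trait, not only through DataFusion queries. Below is a minimal, illustrative sketch; the helper name and the file path inside the directory are hypothetical, and it assumes an async runtime and the environment described under Prerequisites.

use object_store::{path::Path, ObjectStore};

// Read one object from any ObjectStore implementation, e.g. a HadoopFileSystem
// registered as shown in the Examples section below.
async fn read_one(store: &dyn ObjectStore) -> object_store::Result<()> {
    // Hypothetical file under the example directory used later in this README.
    let location = Path::from("testing/tpch_1g/parquet/line_item/part-0.parquet");

    // Fetch the object's metadata, then its full contents.
    let meta = store.head(&location).await?;
    println!("{} is {} bytes", meta.location, meta.size);

    let bytes = store.get(&location).await?.bytes().await?;
    println!("read {} bytes", bytes.len());
    Ok(())
}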

For HDFS access, we leverage the library fs-hdfs. Basically, the library only provides Rust FFI APIs for libhdfs, which is compiled from a set of C files provided by the official Hadoop community.

Prerequisites

Since libhdfs is itself just a C interface wrapper and the real implementation of HDFS access is a set of Java jars, we need to prepare the Hadoop client jars and a JRE environment for this crate to work.

Prepare JAVA

  1. Install Java.

  2. Specify and export JAVA_HOME.

Prepare Hadoop client

  1. To get a Hadoop distribution, download a recent stable release from one of the Apache Download Mirrors. Currently, we support Hadoop-2 and Hadoop-3.

  2. Unpack the downloaded Hadoop distribution, for example into /opt/hadoop. Then prepare the following environment variables:

export HADOOP_HOME=/opt/hadoop

export PATH=$PATH:$HADOOP_HOME/bin

Prepare JRE environment

  1. First, add the library path for the JVM-related dependencies. An example for macOS:
export DYLD_LIBRARY_PATH=$JAVA_HOME/jre/lib/server
  2. Since the compiled libhdfs is a JNI native implementation, it requires a proper CLASSPATH to load the Hadoop-related jars. For example:
export CLASSPATH=$CLASSPATH:`hadoop classpath --glob`
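
As a quick sanity check before running the examples below, the environment variables from the steps above can be verified at runtime. This is purely an illustrative sketch using the standard library; the crate itself does not require it.

use std::env;

// Print whether each environment variable needed by libhdfs is set.
// DYLD_LIBRARY_PATH is the macOS example from above.
fn check_hdfs_env() {
    for var in ["JAVA_HOME", "HADOOP_HOME", "CLASSPATH", "DYLD_LIBRARY_PATH"] {
        match env::var(var) {
            Ok(value) => println!("{var} = {value}"),
            Err(_) => eprintln!("{var} is not set; see the Prerequisites section"),
        }
    }
}

fn main() {
    check_hdfs_env();
}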

Examples

Suppose there is an HDFS directory,

let hdfs_file_uri = "hdfs://localhost:8020/testing/tpch_1g/parquet/line_item";

which contains a list of Parquet files. We can then query these Parquet files as follows:

use std::sync::Arc;

use datafusion::prelude::*;
use datafusion_objectstore_hdfs::object_store::hdfs::HadoopFileSystem;
use url::Url;

// Register the HDFS object store under the "hdfs://" scheme so that
// DataFusion resolves hdfs:// table paths through HadoopFileSystem.
let ctx = SessionContext::new();
let url = Url::parse("hdfs://").unwrap();
ctx.runtime_env().register_object_store(&url, Arc::new(HadoopFileSystem));

// Register the directory of Parquet files as a table, then query it.
let table_name = "line_item";
println!(
    "Register table {} with parquet file {}",
    table_name, hdfs_file_uri
);
ctx.register_parquet(table_name, &hdfs_file_uri, ParquetReadOptions::default()).await?;

let sql = "SELECT count(*) FROM line_item";
let result = ctx.sql(sql).await?.collect().await?;
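
For quick inspection during development, the query output can also be printed directly to stdout with DataFusion's DataFrame::show, reusing the ctx and sql from the example above:

// Print the result instead of collecting it into RecordBatches.
ctx.sql(sql).await?.show().await?;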

Testing

  1. First, clone the test data repository:
git submodule update --init --recursive
  2. Run the tests:
cargo test

During testing, an HDFS cluster is mocked and started automatically.

  3. Run the tests with the hdfs3 feature enabled:
cargo build --no-default-features --features datafusion-objectstore-hdfs/hdfs3,datafusion-objectstore-hdfs-testing/hdfs3,datafusion-hdfs-examples/hdfs3

cargo test --no-default-features --features datafusion-objectstore-hdfs/hdfs3,datafusion-objectstore-hdfs-testing/hdfs3,datafusion-hdfs-examples/hdfs3

Run the ballista-sql test with:

cargo run --bin ballista-sql --no-default-features --features hdfs3

Dependencies

~8–17MB
~215K SLoC