Efflux


Efflux is a set of Rust interfaces for MapReduce and Hadoop Streaming. It enables Rust developers to run batch jobs on Hadoop infrastructure while retaining the efficiency and safety they're used to.

Initially written to scratch a personal itch, this crate offers simple traits that mask the internals of working with Hadoop Streaming and lend themselves well to writing jobs quickly. Functionality is handed off to macros where possible to provide compile-time guarantees, and everything else is kept deliberately simple to avoid unnecessary overhead.

Installation

Efflux is available on crates.io as a library crate, so you only need to add it as a dependency:

[dependencies]
efflux = "2.0"

You can then gain access to everything relevant using the prelude module of Efflux:

use efflux::prelude::*;
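
With the prelude in scope, a job binary simply implements the relevant trait and hands control to Efflux. The sketch below shows a minimal mapper; it is modeled on the wordcount example in this repository, and the exact trait signatures and Context::write call should be treated as illustrative rather than authoritative:

use efflux::prelude::*;

// A mapper that emits each whitespace-separated word with a count of 1.
// NOTE: signatures follow the 2.x wordcount example and may differ in
// other versions; check the crate documentation to be sure.
struct WordcountMapper;

impl Mapper for WordcountMapper {
    fn map(&mut self, _key: usize, value: &[u8], ctx: &mut Context) {
        // input values arrive as raw bytes, so decode leniently
        for word in String::from_utf8_lossy(value).split_whitespace() {
            // emit the word as the key, with a count of 1
            ctx.write(word.as_bytes(), b"1");
        }
    }
}

fn main() {
    // Efflux drives stdin/stdout for the mapper phase
    efflux::run_mapper(WordcountMapper);
}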

Usage

Efflux ships with a handy template for generating new projects via the kickstart tool. Simply run the commands below and follow the prompts to generate a new project skeleton:

# install kickstart
$ cargo install kickstart

# create a project from the template
$ kickstart -s examples/template https://github.com/whitfin/efflux

If you'd rather not use the templating tool, you can always work from the examples found in this repository. A good place to start is the traditional wordcount example.
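
For reference, the reducing side of that wordcount flow might look roughly like the sketch below; as above, the signatures are assumptions based on the bundled examples rather than a guaranteed API:

use efflux::prelude::*;

// A reducer that sums the counts emitted by the mapper for each word.
// NOTE: signatures are assumptions based on the 2.x examples.
struct WordcountReducer;

impl Reducer for WordcountReducer {
    fn reduce(&mut self, key: &[u8], values: &[&[u8]], ctx: &mut Context) {
        // each value should be a numeric count; skip anything malformed
        let total: usize = values
            .iter()
            .filter_map(|v| String::from_utf8_lossy(v).parse::<usize>().ok())
            .sum();

        // write back the word alongside its total count
        ctx.write(key, total.to_string().as_bytes());
    }
}

fn main() {
    efflux::run_reducer(WordcountReducer);
}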

Testing

Testing your binaries is fairly simple, as you can simulate the Hadoop phases using a basic UNIX pipeline. The following example replicates the Hadoop job flow and generates output matching a job executed by Hadoop itself:

# example Hadoop task invocation
$ hadoop jar hadoop-streaming-2.8.2.jar \
    -input <INPUT> \
    -output <OUTPUT> \
    -mapper <MAPPER> \
    -reducer <REDUCER>

# example simulation run via UNIX utilities
$ cat <INPUT> | <MAPPER> | sort -k1,1 | <REDUCER> > <OUTPUT>

You can verify this with the wordcount example to confirm that the two outputs are indeed the same. Output may differ in some edge cases, but this simulation should be sufficient for most development and testing needs.
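
As a concrete (hypothetical) run, after compiling the wordcount binaries you might invoke something like the following; the binary names and paths here depend entirely on your own project layout:

# hypothetical binary names; adjust to your build output
$ cat input.txt \
    | ./target/release/wordcount_mapper \
    | sort -k1,1 \
    | ./target/release/wordcount_reducer > output.txt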
