#streaming #sql #data-processing #data #event-processing

app arroyo

Arroyo is a distributed stream processor that lets users ask complex questions of high-volume real-time data by writing SQL. This CLI can be used to run Arroyo clusters in Docker.

2 unstable releases

0.7.0 Oct 17, 2023
0.6.0 Sep 29, 2023

#280 in Database implementations

MIT/Apache

16KB
262 lines

Arroyo

Arroyo Cloud | Getting started | Docs | Discord | Website

Arroyo is dual-licensed under Apache 2 and MIT licenses. PRs welcome!

Arroyo is a distributed stream processing engine written in Rust, designed to efficiently perform stateful computations on streams of data. Unlike traditional batch processors, streaming engines can operate on both bounded and unbounded sources, emitting results as soon as they are available.

In short: Arroyo lets you ask complex questions of high-volume real-time data with subsecond results.

(Screenshot: a running Arroyo job)
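To give a taste of what an Arroyo SQL pipeline looks like, the sketch below defines a streaming source and continuously filters it. The table name, columns, and connector options are hypothetical, and the exact CREATE TABLE/connector syntax is described in the Arroyo docs, so treat this as illustrative rather than copy-paste-ready:

-- Hypothetical source table backed by a Kafka topic; the WITH options shown
-- here are illustrative and may not match your Arroyo version exactly.
CREATE TABLE transactions (
    user_id TEXT,
    amount FLOAT,
    country TEXT
) WITH (
    connector = 'kafka',
    bootstrap_servers = 'localhost:9092',
    topic = 'transactions',
    format = 'json',
    type = 'source'
);

-- Continuously emit rows that match a condition, as soon as they arrive.
SELECT user_id, amount, country
FROM transactions
WHERE amount > 10000;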

Features

πŸ¦€ SQL and Rust pipelines

πŸš€ Scales up to millions of events per second

πŸͺŸ Stateful operations like windows and joins (see the SQL sketch after this list)

πŸ”₯ State checkpointing for fault-tolerance and recovery of pipelines

πŸ•’ Timely stream processing via the Dataflow model
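
For example, the stateful windowed aggregation mentioned above (πŸͺŸ) could count events per user over one-minute tumbling windows. This sketch assumes the hypothetical transactions source from earlier and a tumble(interval ...) window function; check the Arroyo SQL docs for the exact window syntax in your version:

-- Per-user event counts over 1-minute tumbling windows (the window function
-- name and interval syntax are assumptions; see the SQL docs).
SELECT
    user_id,
    tumble(interval '1 minute') AS time_window,
    count(*) AS txn_count
FROM transactions
GROUP BY user_id, tumble(interval '1 minute');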

Use cases

Some example use cases include:

  • Detecting fraud and security incidents
  • Real-time product and business analytics
  • Real-time ingestion into your data warehouse or data lake
  • Real-time ML feature generation

Why Arroyo

There are already a number of streaming engines out there, including Apache Flink, Spark Streaming, and Kafka Streams. Why create a new one?

  • Serverless operations: Arroyo pipelines are designed to run in modern cloud environments, supporting seamless scaling, recovery, and rescheduling
  • High-performance SQL: SQL is a first-class concern, with consistently excellent performance
  • Designed for non-experts: Arroyo cleanly separates the pipeline APIs from its internal implementation. You don't need to be a streaming expert to build real-time data pipelines.

Getting Started

You can get started with a single-node Arroyo cluster by running the following Docker command:

$ docker run -p 8000:8000 ghcr.io/arroyosystems/arroyo-single:latest

Or, if you have Cargo installed, you can use the arroyo CLI:

$ cargo install arroyo
$ arroyo start

Then, load the Web UI at http://localhost:8000.

For a more in-depth guide, see the getting started guide.

Once you have Arroyo running, follow the tutorial to create your first real-time pipeline.

Developing Arroyo

We love contributions from the community! See the developer setup guide to get started, and reach out to the team on Discord or create an issue.

Arroyo Cloud

Don't want to self-host? Arroyo Systems provides fully-managed cloud hosting for Arroyo. Sign up here.

Dependencies

~16–30MB
~479K SLoC