DataFusion in Python
DataFusion's Python bindings can be used as an end-user tool as well as a foundation for building new systems.
- Execute queries using SQL or DataFrames against CSV, Parquet, and JSON data sources (a short DataFrame API sketch follows this list).
- Queries are optimized using DataFusion's query optimizer.
- Execute user-defined Python code from SQL.
- Exchange data with Pandas and other DataFrame libraries that support PyArrow.
- Serialize and deserialize query plans in Substrait format.
- Experimental support for transpiling SQL queries to DataFrame calls with Polars, Pandas, and cuDF.
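The DataFrame API and Arrow-based interoperability mentioned above can be sketched roughly as follows. This is a minimal illustration with made-up column names and toy data, not one of the project's shipped examples:

```python
import pyarrow as pa
from datafusion import SessionContext, column, functions as f

ctx = SessionContext()

# Build a DataFrame from an in-memory PyArrow RecordBatch (toy data for illustration)
batch = pa.RecordBatch.from_arrays(
    [pa.array([1, 1, 3]), pa.array([4, 5, 6])],
    names=["a", "b"],
)
df = ctx.create_dataframe([[batch]])

# Express a query with the DataFrame API instead of SQL
df = df.aggregate([column("a")], [f.count(column("b"))])

# Exchange the results with other Arrow-aware libraries
pandas_df = df.to_pandas()    # Pandas DataFrame
arrow_batches = df.collect()  # list of pyarrow.RecordBatch
```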
Comparison with other projects
Here is a comparison with similar projects that may help you understand when DataFusion might or might not be suitable for your needs:
DuckDB is an open source, in-process analytic database. Like DataFusion, it supports very fast execution, both from its custom file format and directly from Parquet files. Unlike DataFusion, it is written in C/C++ and it is primarily used directly by users as a serverless database and query system rather than as a library for building such database systems.
Polars is one of the fastest DataFrame libraries at the time of writing. Like DataFusion, it is also written in Rust and uses the Apache Arrow memory model, but unlike DataFusion it does not provide full SQL support, nor as many extension points.
The following example demonstrates running a SQL query against a Parquet file using DataFusion, storing the results in a Pandas DataFrame, and then plotting a chart.
The Parquet file used in this example (yellow_tripdata_2021-01.parquet) is the NYC Taxi & Limousine Commission yellow taxi trip record data and can be downloaded from the TLC trip record data page.
```python
from datafusion import SessionContext

# Create a DataFusion context
ctx = SessionContext()

# Register table with context
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Execute SQL
df = ctx.sql("select passenger_count, count(*) "
             "from taxi "
             "where passenger_count is not null "
             "group by passenger_count "
             "order by passenger_count")

# convert to Pandas
pandas_df = df.to_pandas()

# create a chart
fig = pandas_df.plot(kind="bar", title="Trip Count by Number of Passengers").get_figure()
fig.savefig('chart.png')
```
This produces the following chart:
Runtime settings (memory and disk) and session configuration options can be specified when creating a context.
```python
from datafusion import RuntimeConfig, SessionConfig, SessionContext

runtime = (
    RuntimeConfig()
    .with_disk_manager_os()
    .with_fair_spill_pool(10000000)
)
config = (
    SessionConfig()
    .with_create_default_catalog_and_schema(True)
    .with_default_catalog_and_schema("foo", "bar")
    .with_target_partitions(8)
    .with_information_schema(True)
    .with_repartition_joins(False)
    .with_repartition_aggregations(False)
    .with_repartition_windows(False)
    .with_parquet_pruning(False)
    .set("datafusion.execution.parquet.pushdown_filters", "true")
)
ctx = SessionContext(config, runtime)
```
Refer to the API documentation for more information.
Printing the context will show the current configuration settings.
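For example, continuing with the `ctx` created above:

```python
# Printing the context displays the current configuration settings
print(ctx)
```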
See examples for more information.
Executing Queries with DataFusion
- Query a Parquet file using SQL
- Query a Parquet file using the DataFrame API
- Run a SQL query and store the results in a Pandas DataFrame
- Run a SQL query with a Python user-defined function (UDF)
- Run a SQL query with a Python user-defined aggregation function (UDAF)
- Query PyArrow Data
- Create dataframe
- Export dataframe
Running User-Defined Python Code
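A minimal sketch of registering a scalar Python UDF and calling it from SQL, following the project's UDF example pattern; the table name `t` and the toy data are made up for illustration:

```python
import pyarrow
from datafusion import SessionContext, udf

def is_null(array: pyarrow.Array) -> pyarrow.Array:
    return array.is_null()

# Wrap the Python function as a DataFusion scalar UDF
is_null_arr = udf(is_null, [pyarrow.int64()], pyarrow.bool_(), "stable")

ctx = SessionContext()

# Register a toy in-memory table to query
batch = pyarrow.RecordBatch.from_arrays(
    [pyarrow.array([1, None, 3])], names=["a"]
)
ctx.register_record_batches("t", [[batch]])

# Register the UDF and call it from SQL
ctx.register_udf(is_null_arr)
result = ctx.sql("select a, is_null(a) from t").collect()
```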
Executing SQL against DataFrame Libraries (Experimental)
How to install (from pip)
```bash
pip install datafusion
# or
python -m pip install datafusion
```

```bash
conda install -c conda-forge datafusion
```
You can verify the installation by running:
```python
>>> import datafusion
>>> datafusion.__version__
'0.6.0'
```
How to develop
The Maturin tooling used in this workflow can be installed either via conda or pip. Both approaches offer the same experience and are provided only to suit developer preference. Bootstrapping instructions for conda and pip follow.
```bash
# fetch this repo
git clone git@github.com:apache/arrow-datafusion-python.git

# create the conda environment for dev
conda env create -f ./conda/environments/datafusion-dev.yaml -n datafusion-dev

# activate the conda environment
conda activate datafusion-dev
```
```bash
# fetch this repo
git clone git@github.com:apache/arrow-datafusion-python.git

# prepare development environment (used to build wheel / install in development)
python3 -m venv venv

# activate the venv
source venv/bin/activate

# update pip itself if necessary
python -m pip install -U pip

# install dependencies (for Python 3.8+)
python -m pip install -r requirements-310.txt
```
The tests rely on test data in git submodules.
```bash
git submodule init
git submodule update
```
Whenever the Rust code changes (your changes or those pulled in via `git pull`), rebuild and run the tests:
```bash
# make sure you activate the venv using "source venv/bin/activate" first
maturin develop
python -m pytest
```
Running & Installing pre-commit hooks
arrow-datafusion-python uses pre-commit to assist developers with code linting and to reduce the number of commits that ultimately fail in CI due to linter errors. Using the pre-commit hooks is optional but helpful for keeping PRs clean and concise.
Our pre-commit hooks can be installed by running `pre-commit install`, which installs the configurations from your ARROW_DATAFUSION_PYTHON_ROOT/.github directory. The hooks then run each time you commit, failing the commit if an offending lint is found so that you can fix it locally before pushing.
The pre-commit hooks can also be run ad hoc, without installing them, by running:

```bash
pre-commit run --all-files
```
How to update dependencies
To change test dependencies, edit `requirements.in` and run:
```bash
# install pip-tools (this only needs to be done once), also consider running in venv
python -m pip install pip-tools
python -m piptools compile --generate-hashes -o requirements-310.txt
```
To update dependencies, run the compile command with `-U`:

```bash
python -m piptools compile -U --generate-hashes -o requirements-310.txt
```
More details are available in the pip-tools documentation.