11 releases (4 breaking): 0.5.0 (Apr 11, 2024) · 0.4.4 (Mar 4, 2024) · 0.4.2 (Feb 7, 2024) · 0.3.1 (Jan 26, 2024) · 0.1.0 (Jan 8, 2024)

#2229 in Database interfaces · Used in 2 crates

MIT/Apache · 300KB · 6.5K SLoC

simple_pg_client

A work-in-progress fork of tokio-postgres that integrates deeply with a connection pool and SQL query builder.

Primary Goals:

  • Deeply integrate query builder.
  • Unify client types so a generic client abstraction isn't needed; it is replaced by impl Deref<Target = Conn>.
  • Provide methods for fast, parallel unit testing via schema universes.
  • Optimize performance for our use cases.
  • Reduce incremental compile times via reducing monomorphization.

lib.rs:

An asynchronous, pipelined PostgreSQL client, based on tokio-postgres.

Example

use simple_pg_client::{NoTls, Error};

#[tokio::main] // By default, simple_pg_client uses the tokio crate as its runtime.
async fn main() -> Result<(), Error> {
    // Connect to the database.
    let (client, connection) =
        simple_pg_client::connect("host=localhost user=postgres", NoTls).await?;

    // The connection object performs the actual communication with the database,
    // so spawn it off to run on its own.
    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {}", e);
        }
    });

    // Now we can execute a simple statement that just returns its parameter.
    let rows = client
        .query("SELECT $1::TEXT", &[&"hello world"])
        .await?;

    // And then check that we got back the same string we sent over.
    let value: &str = rows[0].get(0);
    assert_eq!(value, "hello world");

    Ok(())
}

Behavior

Calling a method like Client::query on its own does nothing. The associated request is not sent to the database until the future returned by the method is first polled. Requests are executed in the order that they are first polled, not in the order that their futures are created.

Pipelining

The client supports pipelined requests. Pipelining can improve performance in use cases in which multiple, independent queries need to be executed. In a traditional workflow, each query is sent to the server after the previous query completes. In contrast, pipelining allows the client to send all of the queries to the server up front, minimizing time spent by one side waiting for the other to finish sending data:

            Sequential                              Pipelined
| Client         | Server          |    | Client         | Server          |
|----------------|-----------------|    |----------------|-----------------|
| send query 1   |                 |    | send query 1   |                 |
|                | process query 1 |    | send query 2   | process query 1 |
| receive rows 1 |                 |    | send query 3   | process query 2 |
| send query 2   |                 |    | receive rows 1 | process query 3 |
|                | process query 2 |    | receive rows 2 |                 |
| receive rows 2 |                 |    | receive rows 3 |                 |
| send query 3   |                 |
|                | process query 3 |
| receive rows 3 |                 |

In both cases, the PostgreSQL server is executing the queries sequentially - pipelining just allows both sides of the connection to work concurrently when possible.

Pipelining happens automatically when futures are polled concurrently (for example, by using the futures join combinator):

use futures_util::future;
use simple_pg_client::{Client, Error, Statement};

async fn pipelined_prepare(
    client: &Client,
) -> Result<(Statement, Statement), Error>
{
    future::try_join(
        client.prepare("SELECT * FROM foo"),
        client.prepare("INSERT INTO bar (id, name) VALUES ($1, $2)")
    ).await
}

Runtime

The client works with arbitrary AsyncRead + AsyncWrite streams. Convenience APIs are provided to handle the connection process, but these are gated by the runtime Cargo feature, which is enabled by default. If disabled, all dependence on the tokio runtime is removed.
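As a sketch of opting out, disabling default features in Cargo.toml removes the runtime feature and with it the tokio dependency (the version number is taken from the release list above):

```toml
[dependencies]
# Disable the default `runtime` feature to remove the tokio-based
# convenience connection APIs (and the tokio dependency they pull in).
simple_pg_client = { version = "0.5", default-features = false }
```

With the feature off, you are responsible for establishing the AsyncRead + AsyncWrite stream yourself.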

SSL/TLS support

TLS support is implemented via external libraries. Client::connect and Config::connect take a TLS implementation as an argument. The NoTls type in this crate can be used when TLS is not required. Otherwise, the postgres-openssl and postgres-native-tls crates provide implementations backed by the openssl and native-tls crates, respectively.

Features

The following features can be enabled from Cargo.toml:

| Feature            | Description                                                                          | Extra dependencies                       | Default |
|--------------------|--------------------------------------------------------------------------------------|------------------------------------------|---------|
| runtime            | Enable the convenience connection API based on the tokio crate.                      | tokio 1.0 with the net and time features | yes     |
| array-impls        | Enable ToSql and FromSql trait impls for arrays.                                     | -                                        | no      |
| with-bit-vec-0_6   | Enable support for the bit-vec crate.                                                | bit-vec 0.6                              | no      |
| with-chrono-0_4    | Enable support for the chrono crate.                                                 | chrono 0.4                               | no      |
| with-eui48-0_4     | Enable support for the 0.4 version of the eui48 crate. Deprecated; will be removed.  | eui48 0.4                                | no      |
| with-eui48-1       | Enable support for the 1.0 version of the eui48 crate.                               | eui48 1.0                                | no      |
| with-geo-types-0_6 | Enable support for the 0.6 version of the geo-types crate.                           | geo-types 0.6                            | no      |
| with-geo-types-0_7 | Enable support for the 0.7 version of the geo-types crate.                           | geo-types 0.7                            | no      |
| with-serde_json-1  | Enable support for the serde_json crate.                                             | serde_json 1.0                           | no      |
| with-uuid-0_8      | Enable support for the 0.8 version of the uuid crate.                                | uuid 0.8                                 | no      |
| with-uuid-1        | Enable support for the 1.0 version of the uuid crate.                                | uuid 1.0                                 | no      |
| with-time-0_2      | Enable support for the 0.2 version of the time crate.                                | time 0.2                                 | no      |
| with-time-0_3      | Enable support for the 0.3 version of the time crate.                                | time 0.3                                 | no      |
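As a sketch, enabling a couple of these type-mapping features in Cargo.toml might look like the following (feature names taken from the table above; pairing each with the matching version of its companion crate):

```toml
[dependencies]
# `runtime` stays enabled via default features; add chrono and uuid 1.x mappings.
simple_pg_client = { version = "0.5", features = ["with-chrono-0_4", "with-uuid-1"] }
chrono = "0.4"
uuid = "1"
```

Each with-* feature only adds ToSql/FromSql impls; the companion crate itself must still be declared as a dependency to use its types in your own code.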

Dependencies

~6–18MB
~253K SLoC