#google #cloud #data #api #big-table

bigtable

Library for interfacing with the Google Bigtable Data API

15 releases

0.5.0 Dec 9, 2019
0.4.0 Oct 27, 2017
0.3.3 Mar 27, 2017
0.2.21 Mar 13, 2017
0.1.6 Jan 18, 2017

#526 in Authentication


149 downloads per month

MIT license

410KB
9K SLoC

Join the chat at https://gitter.im/durch/rust-bigtable

rust-bigtable [docs]

Rust library for working with Google Bigtable Data API

Intro

Interface to Cloud Bigtable; supports all Data API methods:

  • CheckAndMutateRow
  • MutateRow
  • MutateRows
  • ReadModifyWriteRow
  • ReadRows
  • SampleRowKeys

Includes support for JWT auth.

How it works

The initial plan was to go full gRPC over HTTP/2; unfortunately, Rust support is not there yet, so a middle way was taken :).

Request objects are protobuf messages, generated from the proto definitions available from Google, and all configuration is done through the interfaces generated this way. These messages are then transparently converted to JSON and sent to the predefined google.api.http endpoints, also defined there. Responses are returned as serde_json::Value.

In theory this should enable an easy upgrade to full gRPC over HTTP/2 as soon as it becomes viable; the only remaining work would be using the proper return types, which are also available as protobuf messages.
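
As a rough sketch of that flow (illustrative only, not this crate's internal API): a ReadRows call is built as a protobuf ReadRowsRequest, rendered as JSON following the proto3 JSON mapping, and POSTed to the corresponding google.api.http endpoint. The endpoint path and field names below follow the public Bigtable v2 REST mapping; the table name is a placeholder.

#[macro_use]
extern crate serde_json;

fn main() {
    // JSON form of a ReadRowsRequest (rows_limit is an int64, so the proto3
    // JSON mapping renders it as a string).
    let body = json!({
        "rowsLimit": "100",
        "rows": { "rowKeys": [], "rowRanges": [] }
    });

    // google.api.http maps ReadRows to a POST on the table resource.
    let table_name = "projects/<project>/instances/<instance>/tables/<table>";
    let endpoint = format!("https://bigtable.googleapis.com/v2/{}:readRows", table_name);

    // The crate then performs the HTTP POST (with the JWT auth token in the
    // Authorization header) and parses the response into serde_json::Value.
    println!("POST {}\n{}", endpoint, body);
}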

Configuration

You can provide either the JSON service account credentials obtained from the Google Cloud Console or the private key file in PEM format (see random_rsa_for_testing for the proper format), as well as a Google Cloud service account with the proper scopes (scopes are handled by goauth as part of authentication).

Usage

In your Cargo.toml

[dependencies]
bigtable = "0.5"

Higher-level wrappers (wraps)

There are higher-level wrappers available for reading and writing rows, so there is no need to craft protobufs manually. Write can also be used to update, though not very robustly yet; improvements coming soon :).

Read and Write

The read wrapper allows a simple limit on the number of rows; it uses the underlying ReadRows method.

There are two write strategies available: bulk_write_rows and write_rows. bulk_write_rows first collects all writes and fires only one request (underlying method: MutateRows), which results in much higher write throughput. write_rows sends one request per row to be written (underlying method: ReadModifyWriteRow).


extern crate bigtable as bt;
extern crate serde_json;

use bt::utils::*;
use bt::wraps;

const TOKEN_URL: &'static str = "https://www.googleapis.com/oauth2/v4/token";
const ISS: &'static str = "some-service-account@developer.gserviceaccount.com";
const PK: &'static str = "pk_for_the_acc_above.pem";

fn read_rows(limit: i64) -> Result<serde_json::Value, BTErr> {

    let token = get_auth_token(TOKEN_URL, ISS, PK)?;
    let table = Default::default();

    wraps::read_rows(table, &token, Some(limit))

}

fn write_rows(n: usize, bulk: bool) -> Result<(), BTErr> {
    let mut rows: Vec<wraps::Row> = vec![wraps::Row::default()]; // put some real data here
    let token = get_auth_token(TOKEN_URL, ISS, PK)?;
    let table = Default::default(); // Again use a real table here
    if bulk {
        let _ = wraps::bulk_write_rows(&mut rows, &token, table);
    } else {
        let _ = wraps::write_rows(&mut rows, &token, table);
    }
    Ok(())
}
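
For completeness, a minimal main tying the helpers above together might look like the sketch below (panicking error handling for brevity; the TOKEN_URL/ISS/PK constants and the default table must point at real credentials and a real table for this to succeed):

fn main() {
    // Write a single default row using the one-request-per-row strategy.
    write_rows(1, false).expect("write_rows failed");

    // Read back up to 10 rows and print the JSON response.
    let rows = read_rows(10).expect("read_rows failed");
    println!("{}", rows);
}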

Dependencies

~34–46MB
~821K SLoC