
Miden proving service

A service for generating Miden proofs on-demand. The binary enables spawning workers and a proxy for Miden's remote proving service. It currently supports proving individual transactions, transaction batches, and blocks.

The worker is a gRPC service that receives a proving request, such as a transaction witness or a proposed batch, and returns the corresponding proof. It handles only one request at a time and returns an error if it is already busy.

The proxy uses Cloudflare's Pingora crate, which provides features to create a modular proxy. It is meant to handle multiple workers with a queue, assigning a worker to each request and retrying if the worker is not available. Further information about Pingora and its features can be found in the official GitHub repository.

Installation

To build and install the service from a local checkout, run the following from the root of the workspace:

make install-proving-service

The CLI can be installed from the source code using specific git revisions with cargo install or from crates.io with cargo install miden-proving-service.
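
For example, a minimal install from crates.io, plus a hedged sketch of pinning a git revision (the repository URL and revision are placeholders you must fill in):

# Install the latest published release from crates.io:
cargo install miden-proving-service

# Or pin a specific git revision (placeholder URL and revision):
cargo install miden-proving-service --git <repository-url> --rev <revision>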

Worker

To start the worker service you will need to run:

miden-proving-service start-worker --host 0.0.0.0 --port 8082 --tx-prover --batch-prover

This will spawn a worker using the host and port defined in the command options. If either value is omitted, it defaults to 0.0.0.0 for the host and 50051 for the port. This command starts a worker that can handle transaction and batch proving requests.

Note that the worker service can be started with the --tx-prover, --batch-prover, and --block-prover flags to handle transaction, batch, and block proving requests, respectively, or with any combination of them to handle multiple types of requests.
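
For example, a single worker that accepts all three request types could be started as follows (host and port values are illustrative):

miden-proving-service start-worker --host 0.0.0.0 --port 8082 --tx-prover --batch-prover --block-prover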

Proxy

To start the proxy service, you will need to run:

miden-proving-service start-proxy [worker1] [worker2] ... [workerN]

For example:

miden-proving-service start-proxy 0.0.0.0:8084 0.0.0.0:8085

This command will start the proxy using the workers passed as arguments. The workers should be in the format host:port. If no workers are passed, the proxy will start without any workers and will not be able to handle any requests until one is added through the miden-proving-service add-worker command.

You can customize the proxy service by setting environment variables. Possible customizations can be found by running miden-proving-service start-proxy --help.

An example .env file is provided in the crate's root directory. To use the variables from a file in any Unix-like operating system, you can run source <your-file>.
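
As an illustrative sketch, such a file could set some of the variables mentioned in this README (values are examples only; run miden-proving-service start-proxy --help to confirm the full list and defaults):

# example values only
export MPS_PROXY_HOST="0.0.0.0"
export MPS_MAX_RETRIES_PER_REQUEST=3
export MPS_ENABLE_METRICS=true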

At the moment, when a worker added to the proxy stops working and the proxy cannot connect to it for a request, the connection is marked as retriable, meaning that the proxy will try reaching another worker. The number of retries is configurable via the MPS_MAX_RETRIES_PER_REQUEST environment variable.

Updating workers on a running proxy

To update the workers on a running proxy, two commands are provided: add-worker and remove-worker. These commands update the proxy's worker list without requiring a restart. To use these commands, you will need to run:

miden-proving-service add-worker --proxy-host <proxy-host> --proxy-update-workers-port <proxy-update-workers-port> [worker1] [worker2] ... [workerN]
miden-proving-service remove-worker --proxy-host <proxy-host> --proxy-update-workers-port <proxy-update-workers-port> [worker1] [worker2] ... [workerN]

For example:

# To add 0.0.0.0:8085 and 200.58.70.4:50051 to the workers list:
miden-proving-service add-worker --proxy-host 0.0.0.0 --proxy-update-workers-port 8083 0.0.0.0:8085 200.58.70.4:50051
# To remove 158.12.12.3:8080 and 122.122.6.6:50051 from the workers list:
miden-proving-service remove-worker --proxy-host 0.0.0.0 --proxy-update-workers-port 8083 158.12.12.3:8080 122.122.6.6:50051

The --proxy-host and --proxy-update-workers-port flags are required to specify the proxy's host and the port on which the proxy listens for updates. The workers are passed as arguments in the format host:port. Both flags can also be set via environment variables, MPS_PROXY_HOST and MPS_PROXY_UPDATE_WORKERS_PORT respectively. For example:

export MPS_PROXY_HOST="0.0.0.0"
export MPS_PROXY_UPDATE_WORKERS_PORT="8083"
miden-proving-service add-worker 0.0.0.0:8085

Note that, in order to update the workers, the proxy must be running on the same machine where the command is executed, because the proxy checks that the client address is localhost to avoid security issues.

Health check

The worker service implements the standard gRPC Health Checking Protocol and includes the methods described in the official grpc.health.v1 proto file.

The proxy service uses this health check to determine if a worker is available to receive requests. If a worker is not available, it will be removed from the set of workers that the proxy can use to send requests.
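
As a quick manual check, you can query the standard health service with a tool such as grpcurl. This sketch assumes a worker listening on 0.0.0.0:8082; note that grpcurl needs either server reflection enabled on the worker or a local copy of the health check proto file:

grpcurl -plaintext 0.0.0.0:8082 grpc.health.v1.Health/Check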

Logging and Tracing

The service uses the tracing crate for both logging and distributed tracing, providing structured, high-performance logs and trace data.

By default, logs are written to stdout and the default logging level is info. This can be changed via the RUST_LOG environment variable. For example:

export RUST_LOG=debug
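
RUST_LOG also accepts per-target filters in the usual tracing/env_logger syntax. For example (the module name below is an assumption derived from the crate name):

export RUST_LOG="info,miden_proving_service=debug"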

For tracing, we use the OpenTelemetry protocol (OTLP). By default, traces are exported to the endpoint specified by the OTEL_EXPORTER_OTLP_ENDPOINT environment variable. To consume and visualize these traces, we can use Jaeger or any other OpenTelemetry-compatible consumer.
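
For example, to export traces to a local collector listening on the default OTLP gRPC port (matching the Jaeger setup below):

export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"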

The simplest way to install Jaeger is by using a Docker container. To do so, run:

docker run -d -p 4317:4317 -p 16686:16686 jaegertracing/all-in-one:latest

Then access the Jaeger UI at http://localhost:16686/.

If Docker is not an option, Jaeger can also be set up directly on your machine or hosted in the cloud. See the Jaeger documentation for alternative installation methods.

Metrics

The proxy includes a service that can optionally expose metrics to be consumed by Prometheus. This service is controlled by the enable_metrics configuration option.

Enabling Prometheus Metrics

To enable Prometheus metrics, set the enable_metrics field to true. This can be done via environment variables or command-line arguments.

Using Environment Variables

Set the following environment variable:

export MPS_ENABLE_METRICS=true

Using Command-Line Arguments

Pass the --enable-metrics flag when starting the proxy:

miden-proving-service start-proxy --enable-metrics [worker1] [worker2] ... [workerN]

When enabled, the Prometheus metrics will be available at the host and port specified by the prometheus_host and prometheus_port fields in the configuration. By default, these are set to 0.0.0.0 and 9090, respectively.
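
If these fields follow the same environment-variable pattern as the other proxy options, they could presumably be set as follows (the variable names here are hypothetical; confirm them with miden-proving-service start-proxy --help):

# hypothetical variable names, following the MPS_ prefix convention
export MPS_PROMETHEUS_HOST="0.0.0.0"
export MPS_PROMETHEUS_PORT=9090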

If metrics are not enabled, the proxy logs a message indicating that Prometheus metrics are disabled.

The metrics architecture works by having the proxy expose metrics at an endpoint (/metrics) in a format Prometheus can read. Prometheus periodically scrapes this endpoint, adds timestamps to the metrics, and stores them in its time-series database. Then, we can use tools like Grafana to query Prometheus and visualize these metrics in configurable dashboards.
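
As a quick sanity check that the endpoint is up (assuming metrics are enabled and exposed at the default 0.0.0.0:9090 described above):

curl http://localhost:9090/metrics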

The simplest way to install Prometheus and Grafana is by using Docker containers. To do so, run:

docker run \
    -d \
    -p 9090:9090 \
    -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus

docker run -d -p 3000:3000 --name grafana grafana/grafana-enterprise:latest

If Docker is not an option, Prometheus and Grafana can also be set up directly on your machine or hosted in the cloud. See the Prometheus documentation and Grafana documentation for alternative installation methods.

A Prometheus configuration file is provided in this repository; you will need to modify the scrape_configs section to include the host and port of the proxy service.
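
A hypothetical scrape_configs entry might look like the following; the job name is arbitrary, and the target placeholder should point at the proxy's metrics host and port:

scrape_configs:
  - job_name: "miden-proving-service-proxy"
    static_configs:
      - targets: ["<proxy-host>:<prometheus-port>"]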

Then, to add the new Prometheus collector as a data source for Grafana, you can follow the official Grafana tutorial. A Grafana dashboard named proxy_grafana_dashboard.json is provided; see the Grafana documentation for how to import it. Otherwise, you can create your own dashboard using the metrics provided by the proxy and export it.

Features

Description of this crate's feature:

Feature      Description
concurrent   Enables concurrent code to speed up runtime execution.

License

This project is MIT licensed.
