#metrics #statsd #tags #proxy #transformation #routing #cardinality

bin+lib statsdproxy

A proxy for transforming, pre-aggregating and routing statsd metrics

2 releases

0.1.2 Jan 29, 2024
0.1.1 Jan 10, 2024

#1537 in Network programming


8,378 downloads per month


1.5K SLoC


A proxy for transforming, pre-aggregating, and routing statsd metrics, similar to Veneur, Vector, or Brubeck.

Currently supports the following transformations:

  • Deny- or allow-listing of specific tag keys or metric names
  • Adding hardcoded tags to all metrics
  • Basic cardinality limiting, tracking either the number of distinct tag values per key or the total number of timeseries (i.e. combinations of metric names and tag sets).

See example.yaml for details.
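As a rough sketch of what such a configuration might look like (the middleware names and keys below are illustrative assumptions, not the real schema, which is defined by the repository's example config):

```yaml
# Hypothetical config sketch -- consult example.yaml for the actual keys.
middlewares:
  - type: allow-tag            # drop all tags except these keys
    tags: [env, region]
  - type: add-tag              # hardcoded tag appended to every metric
    tags: ["source:statsdproxy"]
  - type: cardinality-limit    # cap the number of distinct timeseries
    limit: 10000
```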

A major goal is minimal overhead and no loss of information due to unnecessarily strict parsing. Statsdproxy orients itself around the DogStatsD protocol but degrades gracefully for other statsd dialects: metrics it cannot parse, and otherwise unparseable bytes, are forwarded as-is.
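The "degrade gracefully" rule boils down to recognizing just enough of the DogStatsD line format (`<name>:<value>|<type>[|@rate][|#tags]`) to transform a metric, and passing everything else through untouched. A minimal sketch of such a check (this helper is hypothetical, not part of statsdproxy's API):

```rust
/// Split a dogstatsd-style line into (name, value, type). Returns None for
/// anything that doesn't match `<name>:<value>|<type>...`; the proxy would
/// forward such bytes verbatim rather than drop them.
fn split_metric(line: &str) -> Option<(&str, &str, &str)> {
    let (name, rest) = line.split_once(':')?;
    let (value, tail) = rest.split_once('|')?;
    // The type is the first `|`-separated section; `|@rate` and `|#tags`
    // may follow and are ignored here.
    let ty = tail.split('|').next()?;
    if name.is_empty() || value.is_empty() || ty.is_empty() {
        return None;
    }
    Some((name, value, ty))
}

fn main() {
    for line in ["users.online:1|c|@0.5", "some unparseable junk"] {
        match split_metric(line) {
            Some((name, value, ty)) => println!("parsed: {name} {value} {ty}"),
            None => println!("forwarding as-is: {line}"),
        }
    }
}
```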

This is not a Sentry product and is not deployed in any production environment; it is a side project built during Hackweek.

Basic usage

  1. Run a "statsd server" on port 8081 that just prints metrics

    socat -u UDP-RECVFROM:8081,fork SYSTEM:"cat; echo"
  2. Copy example.yaml to config.yaml and edit it

  3. Run statsdproxy to read metrics from port 8080, transform them using the middleware in config.yaml and forward the new metrics to port 8081:

    cargo run --release -- --listen 127.0.0.1:8080 --upstream 127.0.0.1:8081 -c config.yaml
  4. Send metrics to statsdproxy:

    yes 'users.online:1|c|@0.5' | nc -u 127.0.0.1 8080
  5. You should see new metrics in socat with your middlewares applied.
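If nc is unavailable, step 4 can also be done with a few lines of Rust (a sketch; it assumes statsdproxy from step 3 is listening on 127.0.0.1:8080):

```rust
use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    // Bind an ephemeral local port for sending.
    let sock = UdpSocket::bind("127.0.0.1:0")?;
    // Several metrics may share one datagram, joined with `\n`.
    sock.send_to(b"users.online:1|c|@0.5\npage.views:1|c", "127.0.0.1:8080")?;
    Ok(())
}
```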

Usage with Snuba

Patch the following settings in snuba/settings/__init__.py:


This will send metrics to port 8080.

Processing model

This is the processing model used by the provided server. It should be respected by any usage of this software as a library.

  • The server receives metrics as bytes over UDP, either singly or as several metrics joined with \n.
  • For every metric received, the server invokes the poll method of the topmost middleware.
    • The middleware may use this invocation to do any needed internal bookkeeping.
    • The middleware should then invoke the poll method of the next middleware, if any.
  • Once poll returns, the server invokes the submit method of the topmost middleware with a mutable reference to the current metric.
    • The middleware should process the metric.
      • If processing was successful, and if appropriate to its function (e.g. a metric aggregator might hold onto metrics), the middleware should submit the processed metric to the next middleware, returning the result of this call.
      • If processing was unsuccessful (e.g. an unknown statsd dialect), the unchanged metric should be treated as the processed metric, and passed on or held as above.
      • If a middleware becomes unable to handle more metrics during processing, such that it cannot handle the current metric, it should return Overloaded.
    • If an overload is indicated, the server shall pause (TODO: how long) before calling submit again with the same metric. (If an overload is indicated too many times, maybe drop the metric?)
  • Separately, if no metric is received by the server for 1 second, it will invoke the poll method of the topmost middleware. This invocation of poll should be handled the same as above.
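The poll/submit flow above can be sketched as a chain of middlewares. All names here (Middleware, SubmitResult, AddTag, Sink) are illustrative assumptions, not statsdproxy's actual API:

```rust
/// Result of submitting a metric to a middleware.
#[derive(Debug, PartialEq)]
enum SubmitResult {
    Accepted,
    /// The middleware cannot handle more metrics right now; the caller
    /// should pause and retry submit with the same metric.
    Overloaded,
}

trait Middleware {
    /// Bookkeeping hook; must forward poll to the next middleware, if any.
    fn poll(&mut self);
    /// Process one metric in place, then pass it on (or hold it).
    fn submit(&mut self, metric: &mut Vec<u8>) -> SubmitResult;
}

/// Example middleware: appends one hardcoded tag to every metric and
/// forwards it to the next middleware in the chain.
struct AddTag<N: Middleware> {
    tag: Vec<u8>,
    next: N,
}

impl<N: Middleware> Middleware for AddTag<N> {
    fn poll(&mut self) {
        self.next.poll();
    }
    fn submit(&mut self, metric: &mut Vec<u8>) -> SubmitResult {
        // Naive: append `|#tag`, or `,tag` if a tags section already
        // exists. A real implementation would parse the sections properly.
        if metric.windows(2).any(|w| w == &b"|#"[..]) {
            metric.push(b',');
        } else {
            metric.extend_from_slice(b"|#");
        }
        metric.extend_from_slice(&self.tag);
        self.next.submit(metric)
    }
}

/// Terminal middleware: collects metrics instead of sending them upstream.
struct Sink {
    seen: Vec<Vec<u8>>,
}

impl Middleware for Sink {
    fn poll(&mut self) {}
    fn submit(&mut self, metric: &mut Vec<u8>) -> SubmitResult {
        self.seen.push(metric.clone());
        SubmitResult::Accepted
    }
}

fn main() {
    let mut chain = AddTag { tag: b"env:prod".to_vec(), next: Sink { seen: vec![] } };
    // The server splits a datagram on `\n` and drives each metric through
    // poll first, then submit.
    for line in b"users.online:1|c\napi.latency:3|ms|#route:index".split(|&b| b == b'\n') {
        chain.poll();
        let mut metric = line.to_vec();
        assert_eq!(chain.submit(&mut metric), SubmitResult::Accepted);
    }
    for m in &chain.next.seen {
        println!("{}", String::from_utf8_lossy(m));
    }
}
```

On Overloaded, a real server would keep the metric and retry submit after a pause, as described above.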

