kafcat
Kafcat is a fully async rewrite of kafkacat in Rust.
I was trying to copy some data from Kafka on a remote server to localhost for testing purposes. Kafkacat was the only usable tool for the job, but it is written in C and does not support binary data at all. I was working on a pull request, but it hasn't been merged yet, and the code was hard to maintain and error-prone. So I wrote my own kafcat.
Features
It mimics kafkacat's CLI usage, but with extra features. It can
- read from stdin and send to a Kafka topic
- read from a Kafka topic and write to stdout
- copy from one topic to another (even on different servers)
- import/export data in JSON format
- use swappable backends (only librdkafka for now, kafka-rust coming soon)
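For example, a quick sketch of piping data through kafcat. The flags used here appear in the help output further below; `data.txt` and `dump.json` are placeholder file names, and `json` as a value of `-s` is an assumption based on the JSON import/export feature rather than a documented value:

```sh
# Produce key:value lines from a file to a topic (default ':' delimiter)
cat data.txt | ./kafcat -P --topic test

# Dump the topic back to stdout as JSON, exiting after the last message,
# then re-import it into another topic
./kafcat -C --topic test -s json -e > dump.json
cat dump.json | ./kafcat -P --topic test2 -s json
```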
Drawbacks
Kafcat is still in early development, so it
- does not support TLS yet
- does not support Apache Avro
- does not support complex input/output formats
Basic Usage
- Compile kafcat with `cargo build --release`. You can find the binary at `target/release/kafcat`.
- Make sure Kafka is running (we assume it is at `localhost:9092`). You can use `docker-compose -f tests/plaintext-server.yml up` (note that Kafka in Docker on macOS is problematic; I couldn't even set up a proper testing environment via Docker).
- Set up the listener, assuming `kafcat` is in the current directory: `./kafcat -C --topic test`
- Set up the producer in another terminal: `./kafcat -P --topic test`
- Copy a topic. The format is `./kafcat copy <from> -- <to>`, where `<from>` and `<to>` take exactly the same arguments as the consumer and producer: `./kafcat copy --topic test -- --topic test2` (see the additional sketches after the help output below for copying between different brokers).
- Type any key and value with the default delimiter `:`, for example `hello:world`.
- You should see `hello:world` in the consumer terminal.
- Detailed help can be found via `./kafcat --help`, `./kafcat -C --help`, `./kafcat -P --help`, etc. You can also check out kafkacat for reference.

```
kafcat-consume

USAGE:
    kafcat {consume, -C} [FLAGS] [OPTIONS] --topic <topic>

FLAGS:
    -e, --exit       Exit successfully when last message received
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -b, --brokers <brokers>        Broker list in kafka format [default: localhost:9092]
    -s, --format <format>          Serialize/Deserialize format [default: text]
    -G, --group-id <group-id>      Consumer group id. (Kafka >=0.9 balanced consumer groups) [default: kafcat]
    -K <key-delimiter>             Delimiter to split input key and message [default: :]
    -D <msg-delimiter>             Delimiter to split input into messages (currently only supports '\n') [default: ]
    -o <offset>                    Offset to start consuming from:
                                   beginning | end | stored |
                                   <value> (absolute offset) |
                                   -<value> (relative offset from end) |
                                   s@<value> (timestamp in ms to start at) |
                                   e@<value> (timestamp in ms to stop at (not included))
                                   [default: beginning]
    -p, --partition <partition>    Partition
    -t, --topic <topic>            Topic

kafcat-produce

USAGE:
    kafcat {produce, -P} [OPTIONS] --topic <topic>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -b, --brokers <brokers>        Broker list in kafka format [default: localhost:9092]
    -s, --format <format>          Serialize/Deserialize format [default: text]
    -G, --group-id <group-id>      Consumer group id. (Kafka >=0.9 balanced consumer groups) [default: kafcat]
    -K <key-delimiter>             Delimiter to split input key and message [default: :]
    -D <msg-delimiter>             Delimiter to split input into messages (currently only supports '\n') [default: ]
    -p, --partition <partition>    Partition
    -t, --topic <topic>            Topic

kafcat-copy

Copy mode accepts two parts of arguments <from> and <to>, the two parts are separated by [--].
<from> is the exact as Consumer mode, and <to> is the exact as Producer mode.

USAGE:
    kafcat copy <from>... [--] <to>...

ARGS:
    <from>...
    <to>...

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information
```
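A few more usage sketches built only from the flags documented above. Broker host names, topic names, and the consumer group name are placeholders, not values from the project:

```sh
# Copy a topic from a remote cluster to a local one
# (consumer flags before '--', producer flags after)
./kafcat copy -b remote-host:9092 --topic source-topic -- -b localhost:9092 --topic dest-topic

# Consume only new messages, as part of the consumer group 'mygroup'
./kafcat -C --topic test -o end -G mygroup

# Start consuming from a timestamp in milliseconds, per the -o syntax above
./kafcat -C --topic test -o s@1609459200000

# Use a different key/message delimiter on both the producer and the consumer
./kafcat -P --topic test -K '='
./kafcat -C --topic test -K '='
```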
Programming Style
- `git rebase` onto the master branch when possible
- Squash commits before push
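A minimal sketch of that workflow, assuming the remote is named `origin`; the use of `--force-with-lease` is my assumption, not a stated project policy:

```sh
# Rebase the feature branch onto the latest master instead of merging
git fetch origin
git rebase origin/master

# Squash follow-up commits interactively, then update the remote branch
git rebase -i origin/master   # mark extra commits as 'squash' or 'fixup'
git push --force-with-lease
```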