## Releases

| Version | Date |
|---|---|
| 0.1.2 | Jul 10, 2023 |
| 0.1.1 | Jun 12, 2023 |
| 0.1.0 | Jun 7, 2023 |
# Proa for Kubernetes sidecar management
Inspired by https://github.com/redboxllc/scuttle, https://github.com/joho/godotenv, and https://github.com/kubernetes/enhancements/issues/753, among others. This program is meant to be the entrypoint for the "main" container in a Pod that also contains sidecar containers. Proa is a wrapper around the main process in the main container. It waits for the sidecars to be ready before starting the main program, and it shuts down the sidecars when the main process exits so the whole Pod can exit gracefully, as in the case of a Job.
Briefly, it does this:
- Watch its own Pod's spec and status.
- Wait until all containers (except its own) are ready.
- Start the main (wrapped) process and wait for it to exit.
- Perform some shutdown actions, hitting an HTTP endpoint on localhost or sending signals like `pkill` would.
- Wait for the sidecars to exit.
If it encounters errors during shutdown, it logs each error, but it exits with the same exit code as the wrapped process.
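For example, a main container entrypoint wrapped by proa might look like this. Everything here except the `--shutdown-http-post` flag and the `--` separator is a placeholder: the image, shutdown URL, and program path are illustrative, not part of proa.

```yaml
# Sketch of a main container wrapped by proa (names and paths are hypothetical).
containers:
  - name: main
    image: registry.example.com/my-app:latest    # your application image
    command:
      - /proa
      - --shutdown-http-post=http://localhost:15020/quitquitquit   # hypothetical sidecar shutdown endpoint
      - --                  # separator: everything after this is the wrapped program
      - /app/server         # your real entrypoint and its arguments
      - --port=8080
```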
## Requirements
- Sidecars need `readinessProbe`s.
- The service account needs permission to read and watch its own Pod.
## Usage
If you like, just copy job.yaml and modify it for your use. The Job has a sidecar, simulated by a Python script, that must be ready before the main process starts. We simulate a sidecar that starts slowly by sleeping for 30 seconds before starting the Python HTTP server. Proa uses Kubernetes' knowledge about the readiness of the sidecar container. That means the sidecars must each provide a `readinessProbe`, and the Pod's `serviceAccount` needs permission to read and watch the Pod it's running in.
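The slow-starting sidecar described above can be sketched like this (the container name, image, port, and probe timings are illustrative, not taken from job.yaml):

```yaml
# Sketch of a slow-starting sidecar with a readinessProbe.
# Proa waits until Kubernetes reports this container Ready.
- name: sidecar
  image: python:3.11-slim      # illustrative image
  command: ["sh", "-c", "sleep 30 && exec python -m http.server 8000"]
  readinessProbe:
    httpGet:
      path: /
      port: 8000
    periodSeconds: 2
```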
Or if you prefer, follow this step-by-step guide:
1. Build a container image that has both your main application and the `proa` executable. The easiest way to do this is probably to use a multi-stage Dockerfile to compile `proa` and `COPY` it into your final image. See Dockerfile for an example.
2. Create a `ServiceAccount` for your Job to use.
3. Create a `Role` and `RoleBinding` giving the service account permission to `get`, `watch`, and `list` the `pods` in its own namespace.
4. Modify the Job's `spec.template.spec.serviceAccountName` to refer to that service account.
5. Modify the Job and ensure that the `spec.template.spec.containers` entry for every sidecar has a `readinessProbe`. (It doesn't matter if the main container has a readiness probe; proa will ignore it.)
6. Change the entrypoint (`command` and/or `args`) of the main container to call proa.
   - Pass flags to tell proa how to shut down your sidecars. This will usually be `--shutdown-http-get=URL` or `--shutdown-http-post=URL`. Those flags can be repeated multiple times.
   - Pass the separator string `--`, followed by the path to the main program and all its arguments.
7. Optionally add a `RUST_LOG` environment variable to the main container to control proa's logging verbosity.
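The ServiceAccount, Role, and RoleBinding steps above can be sketched as follows. All object names here are illustrative; the verbs and resource match the permissions listed above.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: proa-job             # illustrative name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: proa-pod-reader      # illustrative name
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: proa-pod-reader
subjects:
  - kind: ServiceAccount
    name: proa-job
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: proa-pod-reader
```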
## Killing
When it's time to shut down, proa can end the processes in your sidecars by sending SIGTERM, but this is probably not what you want. Most processes that receive SIGTERM exit with status 143 or some other nonzero value; Kubernetes will interpret that as a container failure, and it will restart or recreate the Pod to try again.
If you're sure you want to use this, compile the program with the `kill` feature, and also make sure your Pod meets these requirements:

- The Pod needs `shareProcessNamespace` so proa can stop the sidecars, and either
  - the main container with proa needs to run as UID 0 (not recommended), or
  - all containers need to run as the same UID.
- Don't use `hostPID`, or chaos will result as proa tries to kill every process on the node.
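In Pod spec terms, those requirements can be sketched as follows (the UID value is illustrative):

```yaml
spec:
  shareProcessNamespace: true   # required so proa can signal sidecar processes
  securityContext:
    runAsUser: 1000             # illustrative UID; all containers must match
                                # (or proa's container must run as UID 0)
  containers:
    # ...
```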
## Name
It's a program to manage sidecars, but sidecar is a motorcycle metaphor, and Kubernetes is all about nautical memes. A proa is a sailboat with an outrigger, which is sort of like a sidecar on a motorcycle.
## Development
Requirements:
- Use nix and direnv, or install the tools manually. See flake.nix for the list of tools you'll need.
- Docker or equivalent.
- `kind create cluster` to start a tiny Kubernetes cluster in your local Docker.
- `skaffold dev` to start the compile-build-deploy loop.
Every time you save a file, skaffold will rebuild and redeploy, then show you output from the containers in the Pod.
## Sponsors
This project is sponsored by IronCore Labs, makers of data privacy and security solutions for cloud apps, including SaaS Shield for multi-tenant SaaS apps and Cloaked Search for data-in-use encryption over Elasticsearch and OpenSearch.