YAML driven container workflow orchestration
- Designed to work with `nvidia-docker2` and operationalize very complex GPU workflows
- Language-agnostic symbol namespace
- Static/dynamic impersonation (setuid, optionally creating and referencing a host user) to ensure correct privileges
- Internal security hardening by whitelisting a small subset of superuser capabilities
  - SETUID, SETGID, CHOWN
- X11 graphics support
- Each step can run in a different container (or on the host) to support incompatible dependencies in the same workflow
- Support for specifying IPC and network namespaces
- Simple action call convention
- Minimal command-line arguments to launch a workflow
- Colorful logging for readability
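For a concrete picture of what the capability whitelisting and namespace options above translate to, here is an illustrative Python sketch that builds (but does not run) the kind of `docker run` argument list such a tool assembles. The flags shown are standard docker CLI options; the function name and the exact flag set playbook emits are assumptions for illustration only.

```python
# Illustrative sketch (hypothetical helper, not playbook's actual code):
# assemble a `docker run` argv with a whitelisted-capability security model,
# shared IPC/network namespaces, and optional X11 forwarding.
def docker_run_args(image, command, caps=("SETUID", "SETGID", "CHOWN"),
                    ipc="host", network="host", x11=False):
    args = ["docker", "run", "--rm", "--cap-drop", "ALL"]
    for cap in caps:  # re-add only the whitelisted capabilities
        args += ["--cap-add", cap]
    args += ["--ipc", ipc, "--network", network]
    if x11:  # share the host's X11 socket and DISPLAY with the container
        args += ["-e", "DISPLAY", "-v", "/tmp/.X11-unix:/tmp/.X11-unix"]
    return args + [image] + list(command)

print(" ".join(docker_run_args("ubuntu:22.04", ["echo", "ok"], x11=True)))
```

Dropping all capabilities and re-adding only the needed ones keeps container privileges minimal while still allowing user provisioning inside the container.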
- (Optional) Docker CE
- Add a function `def something(ctx)`. The current execution context is passed in `ctx` as a dict. Keys can also be accessed as attributes, which saves a lot of brackets and quotes. Be sure to declare this function to the playbook as a symbol.
```python
## hello_world.py ##
#[playbook(something)]
def something(ctx):
    print(ctx.message)
```
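The attribute-style access mentioned above (`ctx.message` instead of `ctx["message"]`) can be sketched with a small dict wrapper; this is an illustration of the convention, not playbook's actual implementation:

```python
class Context(dict):
    """Dict whose keys can also be read as attributes (illustrative sketch)."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name) from None

ctx = Context(message="Hello World!")
print(ctx["message"] == ctx.message)  # both access styles reach the same value
```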
- Add the source file path to the `whitelist` in `main.yml`
```yaml
## main.yml ##
whitelist:
  - src: hello_world.py
```
- Add an entry to `steps` in your YAML file; `action` is the step function name
```yaml
## main.yml ##
whitelist:
  - src: hello_world.py
steps:
  - name: Some description here
    action: something
    message: Hello World!
```
- Run it!
```shell
$ playbook main.yml
```
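Conceptually, the whole flow above boils down to a symbol table populated by the `#[playbook(...)]` declarations and a loop over the YAML steps. A minimal Python sketch of that dispatch model (names and structure are illustrative assumptions; playbook itself is implemented in Rust):

```python
from types import SimpleNamespace

SYMBOLS = {}  # symbol table; stands in for the #[playbook(...)] declarations

def playbook(name):
    """Register a step function under a symbol name (illustrative decorator)."""
    def register(fn):
        SYMBOLS[name] = fn
        return fn
    return register

@playbook("something")
def something(ctx):
    print(ctx.message)

# Parsed form of the `steps` section of main.yml:
steps = [{"name": "Some description here",
          "action": "something",
          "message": "Hello World!"}]

for step in steps:
    # Look up the step's action in the symbol table and call it with the
    # step's fields as the execution context.
    SYMBOLS[step["action"]](SimpleNamespace(**step))  # prints "Hello World!"
```

The `action` key selects the registered function, and the remaining keys of the step become the context it receives.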
Check the Wiki for more details and examples.
There are features such as impersonation (aka container user provisioning) and other security enhancements,
which make this a better level of abstraction for research & development use.
Try `-vv` to see for yourself how many options we pass in the docker commands / API invocations to run things more carefully and properly.
Seeing all of this as repetitive work that needs to be automated, we created this project!
All of it can absolutely be done with just docker or docker-compose (which is exactly how this works under the hood), but perhaps not without more boilerplate and scripting in every single project to be containerized.
Besides, we also aim to assist global system resource coordination, especially on a shared system. In the long term, we aim to support Kubernetes, SLURM, HTCondor, or any other system we are interested in using.
Kubernetes focuses on workload, this focuses on workflow.
Firstly, any scripting language is out of the question, because it is much more difficult to build a system with high reliability and sustainability requirements in languages so dynamic and tolerant of errors. We want the system to crash hard at the slightest trace of outdated code or data structures (ideally it should not even compile), so that we become aware of problems as early as possible, as opposed to well after they are built into containers and delivered to users.
Secondly, the best AOT-compiled, commitment-free languages out there, in our opinion, are C++, Rust, and Go.
Lastly, we want to have a nice package manager, so Rust is the only option here.