

JoTTy.

Just ?? Thingy. I forgot what o and T stand for. It'll come to me...

an embeddable scheduling framework for distributed/asynchronous processing, designed with a focus on resource scheduling and long/indefinitely-running jobs.

if you have used gRPC before, the simplest way to put it is:

gRPC, but you can execute methods asynchronously and come back later to fetch the results
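
roughly, that flow looks like this from the caller's side (a minimal sketch; the `Client` type and `submit`/`fetch` methods are made up for illustration, not jotty's actual API):

```rust
// hypothetical client-side flow; names are illustrative, not jotty's real API
struct Client;
struct JobId(u64);

impl Client {
    /// fire off a job and immediately get a handle back
    fn submit(&self, method: &str, args: &[&str]) -> JobId {
        println!("queued {method} with args {args:?}");
        JobId(42)
    }

    /// come back later and fetch the result; None if still running
    fn fetch(&self, _job: &JobId) -> Option<String> {
        Some("vm-created".to_string())
    }
}

fn main() {
    let client = Client;
    // "gRPC, but asynchronous": submit now...
    let job = client.submit("CreateVirtualMachine", &["cpus=4"]);
    // ...do other work, then come back for the result whenever
    if let Some(result) = client.fetch(&job) {
        println!("result: {result}");
    }
}
```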

what it is

  • asynchronous rpc framework, where a client can fire off a job and retrieve the results whenever it wants
  • schemaless workflow framework. you define methods, and then jobs can be created on the fly
  • distributed job scheduler, where jobs can run indefinitely, on a schedule, or on demand

what it is not

  • kubernetes/nomad/mesos replacement. while you can build a distributed container orchestrator on top of jotty, jotty itself is just the scheduler framework that can power those types of applications.
  • high performance rpc framework. while jotty includes a way to execute jobs synchronously, the primary goal is fire-and-forget rpc calls. the synchronous system is included for the use cases where you want to directly invoke a method.

why

There are a lot of systems out there that provide a complete, opinionated product. If you want to run long-running tasks, you're limited to a few options like Nomad and Kubernetes, but those are heavyweight systems that require you to build around them. If you want to run workflows, products like Airflow are amazing, but they are rigid and require you to define fixed workflows for everything.

The rigidity of the products out there didn't appeal to me. For a cloud platform project, I want to be able to fire off a CreateVirtualMachine job that accepts user input and figures out what needs to be done. There are (normally) two ways to do this.

  1. Create an API endpoint that accepts the user's input, figures out what needs to be done, creates tasks in something like Celery, and tells the user that their request is processing. The problem here is that you might need to assign an IP to the virtual machine, but the underlying network has not been created yet. One task needs to run before the other. Celery supports priority execution, but that doesn't really solve this: if the network fails to be created, then everything should fail. This is looking a lot like a workflow at this point...

  2. A workflow to perform the CreateVirtualMachine operation is the logical way to do this. You create a workflow to handle the user's request, tell them the job has been created, and then go do the job. Airflow (and other workflow engines) all seem (in my limited research; I'm 100% sure you could work around this) to be built around having a job file that contains your code, which you load into their system. This means all the logic needs to live in that file, and for complex workflows that's a lot of code. Each subsystem, such as networking and storage (which run in different locations for security reasons), would need its own API for the workflow to call.

while each way of doing this is possible for my use cases... nah. I wanted a framework I could build on top of, and building JoTTy provides that. Jobs are created by defining which methods I want to run, in what order, what happens when a method fails, and what arguments are passed to each method. Methods can return data that other methods can access, and data can be shared between them all. Methods can also inject other methods to execute on the fly. All of this together provides (IMO) the best of both worlds from celery and airflow (and similar systems): I get a dumb task scheduler if I want one, but I can layer on additional logic about what to run and how to run it.
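
to make that concrete, here is roughly the shape a job definition could take (the types here are made up for illustration; jotty's real types may differ):

```rust
// illustrative only: a job as an ordered list of method calls with
// per-method failure behavior and arguments
enum OnFailure {
    AbortJob, // e.g. if creating the network fails, everything fails
    Continue,
}

struct MethodCall {
    method: String, // a method registered by some executor
    args: Vec<String>,
    on_failure: OnFailure,
}

struct Job {
    steps: Vec<MethodCall>, // run in order; later steps can read earlier results
}

fn main() {
    let job = Job {
        steps: vec![
            MethodCall {
                method: "CreateNetwork".into(),
                args: vec!["vlan=42".into()],
                on_failure: OnFailure::AbortJob,
            },
            MethodCall {
                method: "CreateVirtualMachine".into(),
                args: vec!["cpus=4".into()],
                on_failure: OnFailure::AbortJob,
            },
        ],
    };
    for step in &job.steps {
        println!("step: {}", step.method);
    }
}
```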

inspired by:

architecture

jotty is designed around zkmq & zkstate, both of which are built on top of zookeeper. there are plans to eventually refactor the supporting packages to support other distributed strongly consistent k/v stores such as etcd.

zkmq provides a lightweight message queue, which is used to queue client operations.

client operations, such as task creation, are queued for evaluation by the manager subsystem. the manager consumes operations and attempts to place each one on an executor based on a number of factors (such as utilization and requested resources). once the manager selects a suitable executor to run the task, zkstate is used to communicate the changes.
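
a sketch of how that placement pass might look (simplified and illustrative; the real selection logic is jotty's internals, which I'm not reproducing here):

```rust
// simplified placement: must fit the requested resources,
// prefer the least-utilized executor
#[derive(Debug)]
struct ExecutorInfo {
    name: &'static str,
    free_cpus: u32,
    utilization: u32, // percent busy
}

fn place(requested_cpus: u32, pool: &[ExecutorInfo]) -> Option<&ExecutorInfo> {
    pool.iter()
        .filter(|e| e.free_cpus >= requested_cpus) // enough free resources?
        .min_by_key(|e| e.utilization)             // least utilized wins
}

fn main() {
    let pool = [
        ExecutorInfo { name: "exec-a", free_cpus: 2, utilization: 80 },
        ExecutorInfo { name: "exec-b", free_cpus: 8, utilization: 30 },
    ];
    // exec-a can't fit 4 cpus, so exec-b is selected
    println!("{:?}", place(4, &pool));
}
```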

each executor has a "shared state" stored in zookeeper via zkstate. this is the internal state of the executor: which tasks are running, resource status, and other data used in operation. storing this information in zookeeper allows for two things. one, the executor can exit and pick up where it left off, if it is configured to do so. two, the manager subsystem can modify this state without sending RPC calls to the executor directly, since the executor subscribes to changes on this object.
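
the pattern looks something like this; a channel stands in for the zookeeper watch so the sketch runs without a zookeeper (the real mechanism is zkstate's subscription, whose API I'm not reproducing here):

```rust
// stand-in for the shared-state pattern: the manager mutates the executor's
// state object and the executor reacts to the change notification. in jotty
// the object lives in zookeeper via zkstate; here a channel fakes the watch.
use std::sync::mpsc;

#[derive(Debug, Clone)]
struct ExecutorState {
    running_tasks: Vec<String>, // part of the executor's internal state
}

fn main() {
    let (notify, watch) = mpsc::channel::<ExecutorState>();

    // manager side: assign a task by writing shared state, no direct RPC
    let mut state = ExecutorState { running_tasks: vec![] };
    state.running_tasks.push("task-1".to_string());
    notify.send(state).unwrap();

    // executor side: subscribed to changes on its own state object
    let observed = watch.recv().unwrap();
    println!("executor observed: {observed:?}");
}
```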

components

executor

the executor is a program that defines methods and handlers for each method.
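
in code terms, something like this (the registry and handler signature are assumptions for illustration, not jotty's actual interface):

```rust
// illustrative executor: a table of method names to handlers
use std::collections::HashMap;

type Handler = fn(&[String]) -> Result<String, String>;

fn create_network(args: &[String]) -> Result<String, String> {
    Ok(format!("created network with {args:?}"))
}

fn main() {
    let mut methods: HashMap<&str, Handler> = HashMap::new();
    methods.insert("CreateNetwork", create_network);

    // dispatch a task that the manager placed on this executor
    let result = methods["CreateNetwork"](&["vlan=42".to_string()]);
    println!("{result:?}");
}
```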

manager

the manager is a set of routines that performs monitoring of state and processing of changes, such as scheduling tasks.

client

the client is whatever is creating tasks.

job types.

infinite.

infinite jobs are jobs that will be rescheduled upon exit until manually stopped.

finite.

finite jobs are jobs that will be rescheduled upon exit until the counter is exhausted.

scheduling policy.

immediate.

immediate jobs will be executed as soon as possible.

scheduled.

scheduled jobs are jobs that will be run at a specified interval relative to a reference time.

on insert, evaluate when the next run should take place. if the reference time is 2021-01-31 21:01:00, the interval is 1D, and the current time is 2021-02-04 09:56:00, the evaluation looks like:

  • 2021-01-31 21:01:00 > 2021-02-04 09:56:00 == false
  • 2021-02-01 21:01:00 > 2021-02-04 09:56:00 == false (+1 day)
  • 2021-02-02 21:01:00 > 2021-02-04 09:56:00 == false (+1 day)
  • 2021-02-03 21:01:00 > 2021-02-04 09:56:00 == false (+1 day)
  • 2021-02-04 21:01:00 > 2021-02-04 09:56:00 == true (+1 day)

resulting in 2021-02-04 21:01:00 being the next scheduled run.
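
that evaluation is straightforward to express. a sketch using the chrono crate for the date math (my choice for the example, not necessarily what jotty uses internally):

```rust
use chrono::{Duration, NaiveDateTime};

/// step forward from the reference time by the interval until we pass `now`
fn next_run(reference: NaiveDateTime, interval: Duration, now: NaiveDateTime) -> NaiveDateTime {
    let mut candidate = reference;
    while candidate <= now {
        candidate = candidate + interval; // not yet in the future: +1 interval
    }
    candidate
}

fn main() {
    let fmt = "%Y-%m-%d %H:%M:%S";
    let reference = NaiveDateTime::parse_from_str("2021-01-31 21:01:00", fmt).unwrap();
    let now = NaiveDateTime::parse_from_str("2021-02-04 09:56:00", fmt).unwrap();

    let next = next_run(reference, Duration::days(1), now);
    assert_eq!(next.format(fmt).to_string(), "2021-02-04 21:01:00");
    println!("next scheduled run: {next}");
}
```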

FAQ

why not just make a system that calls gRPC methods in the background?

the main reason is that I want the ability to allow indefinite service method execution. sure, gRPC supports this in theory by setting no deadlines. but what if the connection is interrupted? what if the server goes down? do you retry the request? there are a few big questions that would need to be answered to make it work, and once it's all said and done, you would end up fairly close to what I have now. the only reason I'm using protobuf is that it provides a very nice and consistent way to declare inputs and outputs for service methods. it just so happens that this overlaps with gRPC.
