#linux-kernel #scheduler #scheduling #bpf #deadlines #task-scheduling #decision

app scx_lavd

A Latency-criticality Aware Virtual Deadline (LAVD) scheduler built on sched_ext, a Linux kernel feature that enables implementing kernel thread schedulers in BPF and loading them dynamically. https://github.com/sched-ext/scx/tree/main

3 releases

0.1.3 Jun 3, 2024
0.1.2 Apr 29, 2024
0.1.1 Apr 4, 2024


GPL-2.0-only

88KB
1.5K SLoC

C 1.5K SLoC // 0.4% comments
Rust 287 SLoC // 0.1% comments

scx_lavd

This is a single user-defined scheduler used within sched_ext, a Linux kernel feature that enables implementing kernel thread schedulers in BPF and loading them dynamically. Read more about sched_ext.

Overview

scx_lavd is a BPF scheduler that implements the LAVD (Latency-criticality Aware Virtual Deadline) scheduling algorithm. While LAVD is new and still evolving, its core ideas are 1) measuring how latency-critical each task is and 2) leveraging that latency-criticality information in various scheduling decisions (e.g., a task's deadline, time slice, etc.). As the name implies, LAVD is built on the foundation of deadline scheduling. The scheduler consists of a BPF part and a Rust part: the BPF part makes all the scheduling decisions, while the Rust part loads the BPF code and handles other chores (e.g., printing sampled scheduling decisions).

Typical Use Case

scx_lavd was initially motivated by gaming workloads: it aims to improve interactivity and reduce stuttering while playing games on Linux. Its typical use case is therefore highly interactive applications, such as games, which require high throughput and low tail latencies.

Production Ready?

This scheduler could be used in production environments for which the current code is optimized. The current code does not specifically account for multiple NUMA/CCX domains, so its scheduling decisions on such hardware may be suboptimal; for now, the scheduler performs best on single-CCX, single-socket hosts.

Dependencies

~26–37MB
~611K SLoC