#quality #bindings #perceptual #vmaf #netflix #wrapper

libvmaf-rs

(WIP) Ergonomic bindings for Netflix's libvmaf

19 releases

0.5.2 May 4, 2024
0.5.1 Dec 14, 2023
0.4.0 Feb 24, 2023
0.3.4 Jan 29, 2023
0.1.3 Dec 21, 2022

#106 in Video


193 downloads per month

GPL-3.0-or-later

440KB
785 lines

libvmaf-rs intends to be an ergonomic wrapper around the raw library bindings for Netflix's libvmaf from libvmaf-sys.

VMAF is an Emmy-winning perceptual video quality assessment algorithm developed by Netflix. It is a full-reference metric, meaning that it is calculated on pairs of reference and distorted pictures.

Getting started:

First, construct a Video for both your reference and your distorted (compressed) video file.

This example uses the same file for both reference and distorted, but normally the distorted video would be a compressed copy while the reference would point to the original, uncompressed video.

let reference: Video = Video::new(&"./video/Big Buck Bunny 720P.m4v", 1920, 1080).unwrap();
let distorted: Video = Video::new(&"./video/Big Buck Bunny 720P.m4v", 1920, 1080).unwrap();

Next, load a model:

let model: Model = Model::default();

Optionally, you may define a callback function. This is useful if you want updates on the progress of the VMAF score calculation.

let callback = |status: VmafStatus| match status {
    VmafStatus::Decode => dostuff(),
    VmafStatus::GetScore => dostuff(),
};

Now we construct a Vmaf context:

let vmaf = Vmaf::new(
    VmafLogLevel::VMAF_LOG_LEVEL_DEBUG,
    num_cpus::get().try_into().unwrap(),
    0,
    0,
)
.unwrap();

To get a vector of scores for every frame, we may use the following method on our new Vmaf context:

let scores = vmaf
    .get_vmaf_scores(reference, distorted, model, Some(callback))
    .unwrap();
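
Once you have the per-frame scores, you can pool them however you like. The sketch below computes a simple mean; it assumes the returned scores can be iterated as f64 values, which is an assumption about the return type rather than something documented above.

// Assumption: `scores` holds one f64 VMAF score per frame.
let mean: f64 = scores.iter().copied().sum::<f64>() / scores.len() as f64;
println!("Mean VMAF over {} frames: {:.2}", scores.len(), mean);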

Dependencies

~3.5–6.5MB
~128K SLoC