Linux-loader

The linux-loader crate offers support for loading raw ELF (vmlinux) and compressed big zImage (bzImage) kernel images on x86_64, as well as PE (Image) kernel images on aarch64 and riscv64. ELF support includes the Linux and PVH boot protocols.
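Each format is handled by a dedicated loader type implementing the KernelLoader trait. A minimal sketch of the relevant imports (module availability is feature- and target-gated, so the exact set depends on the build):

use linux_loader::loader::KernelLoader;

// Format-specific loaders; each implements KernelLoader.
use linux_loader::loader::elf::Elf;         // raw ELF (vmlinux), x86_64
use linux_loader::loader::bzimage::BzImage; // compressed bzImage, x86_64
use linux_loader::loader::pe::PE;           // PE (Image), aarch64/riscv64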

The linux-loader crate is not yet fully independent and self-sufficient; much of the boot process remains the VMM's responsibility. See the Usage section below for details.

Supported features

  • parsing and loading of raw ELF (vmlinux) and compressed bzImage kernel images on x86_64, and of PE (Image) kernel images on aarch64 and riscv64;
  • loading the kernel command line into guest memory;
  • writing boot parameters (e.g. the Linux zero page) into guest memory, for the Linux and PVH boot protocols.

Usage

Booting a guest using the linux-loader crate involves several steps, depending on the boot protocol used. A simplified overview follows.

Consider an x86_64 VMM that:

  • interfaces with linux-loader;
  • uses GuestMemoryMmap for its guest memory backend;
  • loads an ELF kernel image from a File.

Loading the kernel

One of the first steps in starting the guest is to load the kernel image (anything implementing Read + Seek; here, a File) into guest memory. Before this step, the VMM must already have configured its guest memory.
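As a hedged illustration of that precondition, a VMM might build its memory map with the vm-memory crate's GuestMemoryMmap::from_ranges constructor (the single 128 MiB region below is an arbitrary assumption):

use vm_memory::{GuestAddress, GuestMemoryMmap};

// One anonymous mmap-backed region of 128 MiB at guest physical address 0.
let guest_memory = GuestMemoryMmap::from_ranges(&[(GuestAddress(0), 128 << 20)])
    .expect("Failed to create guest memory");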

In this example, the VMM specifies both the kernel starting address and the starting address of high memory.

use linux_loader::loader::elf::Elf as Loader;
use vm_memory::GuestMemoryMmap;

use std::fs::File;

impl MyVMM {
    fn start_vm(&mut self) {
        let guest_memory = self.create_guest_memory();
        // The kernel image source must implement Read + Seek; a File qualifies.
        let mut kernel_file = self.open_kernel_file();

        let load_result = Loader::load::<File, GuestMemoryMmap>(
            &guest_memory,
            Some(self.kernel_start_addr()),
            &mut kernel_file,
            Some(self.himem_start_addr()),
        )
        .expect("Failed to load kernel");
    }
}
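On success, load returns a KernelLoaderResult describing what was loaded. A hedged sketch of consuming it, with field names as documented in recent releases:

// Guest address where the kernel image was loaded.
let entry_addr = load_result.kernel_load;
// For bzImage kernels the parsed setup header is returned as well;
// it is None for ELF images.
let setup_header = load_result.setup_header;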

Configuring the devices and kernel command line

After the guest memory has been created and the kernel parsed and loaded, the VMM can optionally configure devices and the kernel command line. The latter can then be loaded into guest memory.

use std::ffi::CString;

impl MyVMM {
    fn start_vm(&mut self) {
        // ... guest memory creation and kernel loading as above ...
        let kernel_cmdline = self.kernel_cmdline();
        linux_loader::loader::load_cmdline::<GuestMemoryMmap>(
            &guest_memory,
            self.cmdline_start_addr(),
            // load_cmdline expects a NUL-terminated C string.
            &CString::new(kernel_cmdline.as_str()).expect("Failed to parse cmdline"),
        )
        .expect("Failed to load cmdline");
    }
}
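linux-loader also ships a Cmdline builder (linux_loader::cmdline::Cmdline) for assembling and validating the parameter string incrementally. A minimal sketch, assuming a recent release where the constructor and insertions return Result:

use linux_loader::cmdline::Cmdline;

// Build "console=ttyS0 root=/dev/vda" within a 4096-byte budget.
let mut cmdline = Cmdline::new(4096).expect("invalid cmdline capacity");
cmdline.insert("console", "ttyS0").expect("cmdline too long");
cmdline.insert_str("root=/dev/vda").expect("cmdline too long");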

Configuring boot parameters

The VMM sets up the initial register values in this phase, without using linux-loader. It can also configure additional boot parameters, using the structs exported by linux-loader.

use linux_loader::configurator::linux::LinuxBootConfigurator;
use linux_loader::configurator::{BootConfigurator, BootParams};
use linux_loader::loader::bootparam::boot_params;

impl MyVMM {
    fn start_vm(&mut self) {
        // ... kernel and command line loading as above ...
        let mut bootparams = boot_params::default();
        self.configure_bootparams(&mut bootparams);
        // Write the zero page into guest memory.
        LinuxBootConfigurator::write_bootparams(
            BootParams::new(&bootparams, self.zeropage_addr()),
            &guest_memory,
        )
        .expect("Failed to write boot params in guest memory");
    }
}

Done!

Testing

See docs/TESTING.md.

License

This project is licensed under either of:

  • Apache License, Version 2.0;
  • BSD-3-Clause License.
