
A typical Virtual Machine Monitor (VMM) contains several components, such as the boot loader, virtual device drivers, virtio backend drivers, and vhost drivers, that need to access the VM's physical memory. The vm-memory crate provides a set of traits that decouple VM memory consumers from VM memory providers. Based on these traits, VM memory consumers can access the VM's physical memory without knowing the implementation details of the VM memory provider, so VMM components built on these traits can be shared and reused across multiple virtualization solutions.
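The decoupling idea can be illustrated with a self-contained sketch. Note that the trait and types below are simplified stand-ins invented for illustration, not the actual vm-memory API:

```rust
/// Hypothetical, minimal analogue of a guest-memory trait: the consumer
/// only depends on this interface, never on a concrete provider type.
trait GuestMemoryLike {
    fn write(&mut self, buf: &[u8], addr: usize) -> usize;
    fn read(&self, buf: &mut [u8], addr: usize) -> usize;
}

/// A trivial Vec-backed provider standing in for something like
/// GuestMemoryMmap. No bounds-error handling; `addr` must be in range.
struct VecMemory {
    mem: Vec<u8>,
}

impl GuestMemoryLike for VecMemory {
    fn write(&mut self, buf: &[u8], addr: usize) -> usize {
        let end = (addr + buf.len()).min(self.mem.len());
        let n = end.saturating_sub(addr);
        self.mem[addr..end].copy_from_slice(&buf[..n]);
        n
    }

    fn read(&self, buf: &mut [u8], addr: usize) -> usize {
        let end = (addr + buf.len()).min(self.mem.len());
        let n = end.saturating_sub(addr);
        buf[..n].copy_from_slice(&self.mem[addr..end]);
        n
    }
}

/// A consumer (e.g. a virtual device) generic over the trait: it can be
/// reused with any provider that implements GuestMemoryLike.
fn device_roundtrip<M: GuestMemoryLike>(mem: &mut M) -> [u8; 5] {
    let sample = [1u8, 2, 3, 4, 5];
    mem.write(&sample, 0x100);
    let mut out = [0u8; 5];
    mem.read(&mut out, 0x100);
    out
}
```

Calling device_roundtrip with a VecMemory provider round-trips the bytes; swapping in a different provider requires no change to the consumer.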

The detailed design of the vm-memory crate can be found here.

Platform Support

  • Arch: x86, AMD64, ARM64
  • OS: Linux/Unix/Windows

Xen support

Supporting Xen requires special handling while mapping the guest memory, so the crate provides a separate feature for it: xen. Mapping guest memory for Xen requires an ioctl() to be issued along with mmap() for the memory area; the arguments for the ioctl() are received via the vhost-user protocol's memory region area.

Xen allows two different mapping models: Foreign and Grant.

In the Foreign mapping model, the entire guest address space is mapped at once, in advance. In the Grant mapping model, only the memory for a few regions, such as those containing the virtqueues, is mapped in advance; the rest of the memory is mapped (partially) only while its buffers are being accessed, and the mapping is released immediately after the access completes. This is why these mappings need special handling in VolatileMemory.rs.
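The map-on-access, unmap-on-drop behaviour of the Grant model can be sketched with a simplified RAII guard. Everything below is an illustrative stand-in, not the crate's Xen implementation:

```rust
use std::cell::RefCell;

/// Stand-in for a Grant-style provider: memory is "mapped" only for the
/// duration of an access and torn down right after.
struct GrantBacked {
    // Counts how many temporary mappings are currently live.
    active_mappings: RefCell<usize>,
    bytes: RefCell<Vec<u8>>,
}

/// Guard representing a temporary mapping; dropping it "unmaps".
struct Mapping<'a> {
    owner: &'a GrantBacked,
    addr: usize,
    len: usize,
}

impl GrantBacked {
    fn map(&self, addr: usize, len: usize) -> Mapping<'_> {
        *self.active_mappings.borrow_mut() += 1;
        Mapping { owner: self, addr, len }
    }
}

impl Mapping<'_> {
    fn write(&self, data: &[u8]) {
        // Copy into the mapped window, clipped to the mapping length.
        let n = data.len().min(self.len);
        let mut bytes = self.owner.bytes.borrow_mut();
        bytes[self.addr..self.addr + n].copy_from_slice(&data[..n]);
    }
}

impl Drop for Mapping<'_> {
    fn drop(&mut self) {
        // "Unmap" as soon as the buffer access is done.
        *self.owner.active_mappings.borrow_mut() -= 1;
    }
}
```

The guard guarantees the mapping exists only for the lifetime of the access, mirroring how Grant mappings are deallocated immediately after a buffer is used.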

In order to still support standard Unix memory regions, for special regions and for testing, the Xen-specific implementation allows a third mapping type: MmapXenFlags::UNIX. This performs standard Unix memory mapping and is used for all tests in this crate.

The rust-vmm maintainers decided to keep the interface simple and build the crate for either standard Unix memory mapping or Xen, but not both.

Xen is only supported for Unix platforms.
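For reference, enabling the Xen feature from Cargo.toml might look like the following. The xen feature name comes from the text above; the version number is only illustrative:

```toml
[dependencies]
# "xen" builds the Xen mapping support instead of standard Unix mmap,
# per the either/or decision noted above.
vm-memory = { version = "0.14", features = ["xen"] }
```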


Add vm-memory as a dependency in Cargo.toml:

vm-memory = "*"

Then add extern crate vm-memory; to your crate root (only needed on the 2015 edition of Rust; later editions can simply use vm_memory paths directly).


  • Creating a VM physical memory object in hypervisor-specific ways using the GuestMemoryMmap implementation of the GuestMemory trait:
fn provide_mem_to_virt_dev() {
    let gm = GuestMemoryMmap::from_ranges(&[
        (GuestAddress(0), 0x1000),
        (GuestAddress(0x1000), 0x1000),
    ])
    .unwrap();
    virt_device_io(&gm);
}
  • Consumers accessing the VM's physical memory:
fn virt_device_io<T: GuestMemory>(mem: &T) {
    let sample_buf = &[1, 2, 3, 4, 5];
    assert_eq!(mem.write(sample_buf, GuestAddress(0xffc)).unwrap(), 5);
    let buf = &mut [0u8; 5];
    assert_eq!(mem.read(buf, GuestAddress(0xffc)).unwrap(), 5);
    assert_eq!(buf, sample_buf);
}
Note that the five-byte access at GuestAddress(0xffc) deliberately spans the boundary between two 0x1000-byte regions; the GuestMemory trait handles such cross-region accesses transparently.


This project is licensed under either of

  • Apache License, Version 2.0
  • BSD-3-Clause License