shared-mem-queue
This no_std library implements simple lock-free single-writer single-reader queues which can be used to communicate efficiently over shared memory. They are especially useful for inter-process communication or, on systems with multiple processors accessing the same memory, even for inter-processor communication. They can also be used as single-thread FIFO queues, or to communicate between threads or tasks in no_std environments where the std::sync::mpsc types, which are safer alternatives for these use cases, are not available.
Currently, this library implements two FIFO queue types:

ByteQueue: This queue implements a byte-oriented interface. It may be used whenever transmission boundaries can be ignored, e.g. for text-based communication like logging, or whenever it is trivial to restore packet boundaries, e.g. for transmitting streams of fixed-size data such as streams of the same value type (see the sketch after this list). For implementation details, see the byte_queue module.

MsgQueue: This queue implements a message-/packet-/datagram-based interface, i.e. it preserves packet boundaries. On the reader side, it only delivers complete and CRC-checked packets. It can be used whenever packet boundaries should be preserved, e.g. for variable-sized serialized data. This queue is implemented on top of the ByteQueue and uses it to transfer data following a custom protocol format. For further implementation details, see the msg_queue module.
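As a minimal sketch of the fixed-size case, assuming only the write_blocking and consume_blocking calls shown in the usage examples below, a stream of u32 values can be framed over a ByteQueue by sending each value as its 4-byte little-endian encoding; send_u32 and receive_u32 are hypothetical helpers, not part of this crate:

use shared_mem_queue::byte_queue::ByteQueue;

// Each value occupies exactly four bytes, so the reader restores the
// boundaries by always consuming four bytes per value.
fn send_u32(queue: &mut ByteQueue, value: u32) {
    queue.write_blocking(&value.to_le_bytes());
}

fn receive_u32(queue: &mut ByteQueue) -> u32 {
    let mut bytes = [0u8; 4];
    queue.consume_blocking(&mut bytes);
    u32::from_le_bytes(bytes)
}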
Initially, this library was developed as a simple and efficient solution to communicate between the Cortex-M4 and the Cortex-A7 on STM32MP1 microprocessors. It assumes that u32 values can be written and read atomically. This requirement holds for many modern architectures; in particular, it is fulfilled by the x86 and ARM architectures, both 32-bit and 64-bit.
Usage Examples
Single-Processor Single-Thread Byte Queue
use shared_mem_queue::byte_queue::ByteQueue;

type AlignedType = u32; // `u32` for proper alignment of the buffer regardless of its size
const LEN_SCALER: usize = core::mem::size_of::<AlignedType>();
let mut buffer = [0 as AlignedType; 128];
let mut queue = unsafe {
    ByteQueue::create(buffer.as_mut_ptr() as *mut u8, buffer.len() * LEN_SCALER)
};
let tx = [1, 2, 3, 4];
queue.write_blocking(&tx);
let mut rx = [0u8; 4];
queue.consume_blocking(&mut rx);
assert_eq!(&tx, &rx);
Single-Processor Multi-Thread Byte Queue
A more realistic example involves creating a reader and a writer separately; although not shown
here, they may be moved to a different thread. Since the ByteQueue
is instantiated with a raw
pointer instead of a reference, the borrow checker does not track the lifetime of the
underlying buffer.
use shared_mem_queue::byte_queue::ByteQueue;

const LEN_U32_TO_U8_SCALER: usize = core::mem::size_of::<u32>();
let mut buffer = [123 as u32; 17];
let mut writer = unsafe {
    ByteQueue::create(
        buffer.as_mut_ptr() as *mut u8,
        buffer.len() * LEN_U32_TO_U8_SCALER,
    )
};
let mut reader = unsafe {
    ByteQueue::attach(
        buffer.as_mut_ptr() as *mut u8,
        buffer.len() * LEN_U32_TO_U8_SCALER,
    )
};
let tx = [1, 2, 3, 4];
writer.write_blocking(&tx);
let mut rx = [0u8; 4];
reader.consume_blocking(&mut rx);
assert_eq!(&tx, &rx);
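The following is a minimal sketch of moving the reader to another thread in a std environment. It assumes that ByteQueue does not implement Send on its own because it wraps a raw pointer, so a hypothetical SendQueue wrapper (not part of this crate) asserts that the transfer is sound; this assertion is only valid if the underlying buffer outlives both threads.

use shared_mem_queue::byte_queue::ByteQueue;

// Hypothetical wrapper asserting that the reader handle may be sent to
// another thread; sound only if the shared buffer outlives both threads.
struct SendQueue(ByteQueue);
unsafe impl Send for SendQueue {}

fn run_split(mut writer: ByteQueue, reader: ByteQueue) {
    let reader = SendQueue(reader);
    let handle = std::thread::spawn(move || {
        let mut reader = reader;
        let mut rx = [0u8; 4];
        // Blocks until the writer has produced four bytes.
        reader.0.consume_blocking(&mut rx);
        rx
    });
    writer.write_blocking(&[1, 2, 3, 4]);
    assert_eq!(handle.join().unwrap(), [1, 2, 3, 4]);
}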
Single-Processor Message Queue
use shared_mem_queue::byte_queue::ByteQueue;
use shared_mem_queue::msg_queue::MsgQueue;

const DEFAULT_PREFIX: &'static [u8] = b"DEFAULT_PREFIX: "; // 16 bytes long
let mut bq_buf = [0u32; 64];
let mut msg_queue = unsafe {
    MsgQueue::new(
        ByteQueue::create(bq_buf.as_mut_ptr() as *mut u8, bq_buf.len() * 4),
        DEFAULT_PREFIX,
        [0u8; 64 * 4],
    )
};
let msg = b"Hello, World!";
let result = msg_queue.write_or_fail(msg);
assert!(result.is_ok());
let read_msg = msg_queue.read_or_fail().unwrap();
assert_eq!(read_msg, msg);
Shared-Memory Byte Queue
In general, an mmap call is required to access the queue from a Linux system. This can be done with the memmap crate. The following example probably panics when executed naively because access to /dev/mem requires root privileges. Additionally, the example memory region in use is probably not viable for this queue on most systems. In the following example, ByteQueue::attach is used, i.e. it is assumed that another processor or process has already initialized a ByteQueue in the same memory region.
use shared_mem_queue::byte_queue::ByteQueue;

let shared_mem_start = 0x10048000; // example
let shared_mem_len = 0x00008000; // region
let dev_mem = std::fs::OpenOptions::new()
    .read(true)
    .write(true)
    .open("/dev/mem")
    .expect("Could not open /dev/mem, do you have root privileges?");
let mut mmap = unsafe {
    memmap::MmapOptions::new()
        .len(shared_mem_len)
        .offset(shared_mem_start.try_into().unwrap())
        .map_mut(&dev_mem)
        .unwrap()
};
let mut channel = unsafe {
    ByteQueue::attach(mmap.as_mut_ptr(), shared_mem_len)
};
Bi-Directional Shared-Memory Communication
In most inter-processor communication scenarios, two queues are required for bi-directional communication. A single mmap call is sufficient; the memory region can be split manually afterwards:
use shared_mem_queue::byte_queue::ByteQueue;

let shared_mem_start = 0x10048000; // example
let shared_mem_len = 0x00008000; // region
let dev_mem = std::fs::OpenOptions::new()
    .read(true)
    .write(true)
    .open("/dev/mem")
    .expect("Could not open /dev/mem, do you have root privileges?");
let mut mmap = unsafe {
    memmap::MmapOptions::new()
        .len(shared_mem_len)
        .offset(shared_mem_start.try_into().unwrap())
        .map_mut(&dev_mem)
        .unwrap()
};
let mut channel_write = unsafe {
    ByteQueue::attach(mmap.as_mut_ptr(), shared_mem_len / 2)
};
let mut channel_read = unsafe {
    ByteQueue::attach(mmap.as_mut_ptr().add(shared_mem_len / 2), shared_mem_len / 2)
};
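On the peer side, which in this scenario initializes the two queues, the same split would be performed on that side's own mapping of the region, with the read and write roles reversed and ByteQueue::create used instead of ByteQueue::attach. A minimal sketch, assuming an equivalent mmap of the same physical region and the same shared_mem_len as above:

// The half this side reads from is the half the other side writes to, and
// vice versa; `create` initializes the queue state in shared memory.
let mut peer_read = unsafe {
    ByteQueue::create(mmap.as_mut_ptr(), shared_mem_len / 2)
};
let mut peer_write = unsafe {
    ByteQueue::create(mmap.as_mut_ptr().add(shared_mem_len / 2), shared_mem_len / 2)
};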
License
Open Logistics Foundation License
Version 1.3, January 2023
See the LICENSE file in the top-level directory.
Contact
Fraunhofer IML Embedded Rust Group - embedded-rust@iml.fraunhofer.de