shared-mem-queue
This library implements simple single-writer single-reader queues: a ByteQueue for a byte-oriented streaming interface (like UART or TCP) and a MsgQueue for a message-/packet-/datagram-based interface (like UDP). The queues can be used for inter-processor communication over a shared memory region. This was initially developed as a simple, low-overhead solution for communicating between the Cortex-M4 and the Cortex-A7 on STM32MP1 microprocessors, but it may also be useful in other scenarios.
Implementation details
The underlying byte queue operates on a shared memory region and keeps track of a write pointer and a read pointer. So that both processors can access them, the write and read pointers are stored in the shared memory region itself; the capacity of the queue is therefore 2*size_of::<usize>() smaller than the memory region size.
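As a small worked example of that formula (a sketch only; the helper function is made up for illustration and is not part of the crate's API):

```rust
use std::mem::size_of;

// Illustrative helper (not a crate API): usable payload capacity of a
// region once the two pointers stored at its start are accounted for.
fn usable_capacity(region_len: usize) -> usize {
    region_len - 2 * size_of::<usize>()
}

fn main() {
    // On a 64-bit target, size_of::<usize>() is 8, so a 128-byte region
    // leaves 112 bytes for payload.
    println!("capacity of 128-byte region: {}", usable_capacity(128));
}
```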
The main contract here is:

- Only the writer may write to the write pointer.
- Only the reader may write to the read pointer.
- The memory region in front of the write pointer and up to the read pointer is owned by the writer.
- The memory region in front of the read pointer and up to the write pointer is owned by the reader.

For initialization, both pointers have to be set to 0 at the beginning. This breaks the contract because the initializing processor needs to write both pointers. Therefore, this has to be done by processor A while it is guaranteed that processor B does not access the queue yet, to prevent race conditions.
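This ownership rule is the invariant behind any single-producer single-consumer ring buffer. As a rough, self-contained sketch (plain struct fields instead of pointers stored in shared memory, so this is not the crate's implementation):

```rust
// Minimal single-writer/single-reader ring sketch. In the real queue the
// two indices live at the start of the shared region; here they are plain
// struct fields to keep the example self-contained.
struct Ring {
    buf: Vec<u8>,
    write: usize, // only ever advanced by the writer
    read: usize,  // only ever advanced by the reader
}

impl Ring {
    fn new(capacity: usize) -> Self {
        // One slot is kept empty so that write == read
        // unambiguously means "empty".
        Ring { buf: vec![0; capacity + 1], write: 0, read: 0 }
    }

    // Writer side: owns the region from `write` up to (but excluding) `read`.
    fn try_write(&mut self, byte: u8) -> bool {
        let next = (self.write + 1) % self.buf.len();
        if next == self.read {
            return false; // full
        }
        self.buf[self.write] = byte;
        self.write = next; // publish: only the writer touches `write`
        true
    }

    // Reader side: owns the region from `read` up to (but excluding) `write`.
    fn try_read(&mut self) -> Option<u8> {
        if self.read == self.write {
            return None; // empty
        }
        let byte = self.buf[self.read];
        self.read = (self.read + 1) % self.buf.len();
        Some(byte)
    }
}

fn main() {
    let mut ring = Ring::new(4);
    for b in [1u8, 2, 3, 4] {
        assert!(ring.try_write(b));
    }
    assert!(!ring.try_write(5)); // full: all 4 slots used
    assert_eq!(ring.try_read(), Some(1));
    assert!(ring.try_write(5)); // space freed by the reader
}
```

Because each index is written by exactly one side, no lock is needed as long as the index updates themselves are atomic.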
Because processor A has to initialize the byte queue and processor B must not reset the write and read pointers, there are two methods for initialization: create() should be called by the first processor and sets both pointers to 0; attach() should be called by the second one.
The ByteQueue implements both the write and the read methods, but each processor should be assigned either the writing side or the reading side and must not call the other side's methods. It would also be possible to have a SharedMemWriter and a SharedMemReader, but this design was initially chosen so that the queue can also be used as a simple ring buffer on a single processor.
The MsgQueue abstraction builds on top of the ByteQueue for communication. It handles variable-sized messages using the following format:
Field | Size |
---|---|
Message Prefix | fixed size |
Data Size | size_of::<usize>() bytes |
Data | variable; determined by the Data Size field |
CRC | 32 bits |
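To make the layout concrete, the following sketch lays out one frame in that order. The 16-byte prefix mirrors the DEFAULT_PREFIX from the usage example below, and the checksum is a simple byte sum standing in for the crate's actual CRC-32, which is not specified here:

```rust
use std::mem::size_of;

// Assumed 16-byte prefix for illustration; the real prefix is whatever
// is passed to MsgQueue::new.
const PREFIX: &[u8] = b"DEFAULT_PREFIX: ";

// Lay out a frame as described in the table: prefix, data size, data, CRC.
// The checksum below is a stand-in, NOT the crate's actual CRC-32.
fn frame(data: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(PREFIX);                       // Message Prefix, fixed size
    out.extend_from_slice(&data.len().to_ne_bytes());    // Data Size, size_of::<usize>() bytes
    out.extend_from_slice(data);                         // Data, variable
    let crc: u32 = data.iter().map(|&b| b as u32).sum(); // stand-in checksum
    out.extend_from_slice(&crc.to_ne_bytes());           // CRC, 32 bits
    out
}

fn main() {
    let f = frame(b"Hello, World!");
    // Total length: 16 (prefix) + size_of::<usize>() + 13 (data) + 4 (CRC).
    assert_eq!(f.len(), PREFIX.len() + size_of::<usize>() + 13 + 4);
    println!("frame length = {}", f.len());
}
```

The fixed prefix lets the reader resynchronize on a frame boundary, and the CRC lets it reject frames that were corrupted or only partially written.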
Usage Examples
Single-Processor Byte Queue
```rust
let mut buffer = [0u8; 128];
let mut queue = unsafe { ByteQueue::create(buffer.as_mut_ptr(), 100) };
let tx = [1, 2, 3, 4];
queue.blocking_write(&tx);
let mut rx = [0u8; 4];
queue.blocking_read(&mut rx);
assert_eq!(&tx, &rx);
```
A more realistic example creates a writer and a reader separately; although not shown here, they may then be moved to different threads:
```rust
let mut buffer = [0u8; 128];
let mut writer = unsafe { ByteQueue::create(buffer.as_mut_ptr(), 100) };
let mut reader = unsafe { ByteQueue::attach(buffer.as_mut_ptr(), 100) };
let tx = [1, 2, 3, 4];
writer.blocking_write(&tx);
let mut rx = [0u8; 4];
reader.blocking_read(&mut rx);
assert_eq!(&tx, &rx);
```
Single-Processor Message Queue
```rust
const DEFAULT_PREFIX: &'static [u8] = b"DEFAULT_PREFIX: "; // 16 bytes long
let mut bq_buf = [0u8; 128];
let mut msg_queue = unsafe {
    MsgQueue::new(ByteQueue::create(bq_buf.as_mut_ptr(), 128), DEFAULT_PREFIX, [0u8; 128])
};
let msg = b"Hello, World!";
let result = msg_queue.nb_write_msg(msg);
assert!(result.is_ok());
let read_msg = msg_queue.nb_read_msg().unwrap();
assert_eq!(read_msg, msg);
```
Shared-Memory Queue
In general, an mmap call is required to access the queue from a Linux system. This can be done with the memmap crate. The following example will probably panic when run naively because access to /dev/mem requires root privileges. Additionally, the example region used here is probably not viable for this queue on most systems:
```rust
let shared_mem_start = 0x10048000; // example
let shared_mem_len = 0x00008000;   // region
let dev_mem = std::fs::OpenOptions::new()
    .read(true)
    .write(true)
    .open("/dev/mem")
    .expect("Could not open /dev/mem, do you have root privileges?");
let mut mmap = unsafe {
    memmap::MmapOptions::new()
        .len(shared_mem_len)
        .offset(shared_mem_start.try_into().unwrap())
        .map_mut(&dev_mem)
        .unwrap()
};
let mut channel = unsafe {
    ByteQueue::attach(mmap.as_mut_ptr(), shared_mem_len)
};
```
Bi-Directional Shared-Memory Communication
In most inter-processor-communication scenarios, two queues will be required for bi-directional communication. A single mmap call is sufficient; the memory region can be split manually afterwards:
```rust
let shared_mem_start = 0x10048000; // example
let shared_mem_len = 0x00008000;   // region
let dev_mem = std::fs::OpenOptions::new()
    .read(true)
    .write(true)
    .open("/dev/mem")
    .expect("Could not open /dev/mem, do you have root privileges?");
let mut mmap = unsafe {
    memmap::MmapOptions::new()
        .len(shared_mem_len)
        .offset(shared_mem_start.try_into().unwrap())
        .map_mut(&dev_mem)
        .unwrap()
};
let mut channel_write = unsafe {
    ByteQueue::attach(mmap.as_mut_ptr(), shared_mem_len / 2)
};
let mut channel_read = unsafe {
    ByteQueue::attach(mmap.as_mut_ptr().add(shared_mem_len / 2), shared_mem_len / 2)
};
```
License
Open Logistics Foundation License
Version 1.3, January 2023
See the LICENSE file in the top-level directory.
Contact
Fraunhofer IML Embedded Rust Group - embedded-rust@iml.fraunhofer.de