#opencv #computer-vision #yolo #neural-network #object-detection #networking #yolo-v8

od_opencv

Object detection utilities in Rust programming language for YOLO-based neural networks in OpenCV ecosystem

5 releases

0.1.5 Dec 26, 2023
0.1.4 Nov 27, 2023
0.1.3 Nov 27, 2023
0.1.2 Nov 27, 2023
0.1.1 Nov 17, 2023

#212 in Multimedia

38 downloads per month

MIT license

57KB
533 lines


Object detection utilities in Rust programming language for YOLO-based neural networks in OpenCV ecosystem

This crate provides some basic structures and methods for solving object detection tasks via OpenCV's DNN module. Currently implemented and tested workflows:

| Network type | Darknet | ONNX |
|--------------|---------|------|
| YOLO v3 tiny | ✅ | ⚠️ (need to test) |
| YOLO v4 tiny | ✅ | ⚠️ (need to test) |
| YOLO v7 tiny | ✅ | ⚠️ (need to test) |
| YOLO v3 | ✅ | ⚠️ (need to test) |
| YOLO v4 | ✅ | ⚠️ (need to test) |
| YOLO v7 | ✅ | ⚠️ (need to test) |
| YOLO v8 n | ❌ (is it even possible?) | ✅ |
| YOLO v8 s | ❌ (is it even possible?) | ✅ |
| YOLO v8 m | ❌ (is it even possible?) | ✅ |
| YOLO v8 l | ❌ (is it even possible?) | ✅ |
| YOLO v8 x | ❌ (is it even possible?) | ✅ |

Table of Contents

  • About
  • Prerequisites
  • Usage
  • References

About

- Why?

Well, I just got a bit tired of boilerplate (model initialization, postprocessing functions and so on) in both my private and public projects.

- When is it useful?

Well, there are several circumstances when you may need this crate:

  • You need to use YOLO as your neural network base;
  • You do not want to use PyTorch / TensorFlow / JAX or any other DL/ML framework (someday this crate may support pure ONNX without the OpenCV dependency; PR's are welcome);
  • You need to use OpenCV's DNN module to initialize the neural network.

- Why no YOLOv5?

I think there is a difference in the postprocessing between the v5 and v8 versions. I need more time to investigate exactly what needs to be done to make v5 work.

- Which OpenCV version is tested?

I've tested it with v4.7.0. Rust bindings version: v0.66.0

- Are the wrapper structures thread-safe?

I'm not sure they are intended to be used across multiple threads (PR's are welcome). But if you want to give "async" access to the provided structs, I think you should put some queue mechanism in front of them.
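
Here is a minimal sketch of that queue idea (it is not something the crate provides): a single worker thread owns the model and other threads submit frames over a channel. The constructor and `forward` call mirror the Usage section below; the `Detections` alias, the CPU backend choice and the hard-coded paths are my assumptions.

    use std::sync::mpsc;
    use std::thread;

    use opencv::core::{Mat, Rect};
    use opencv::dnn::{DNN_BACKEND_OPENCV, DNN_TARGET_CPU};

    use od_opencv::{model_format::ModelFormat, model_ultralytics::ModelUltralyticsV8};

    // Assumed return shape of `forward`: bounding boxes, class ids, confidences.
    type Detections = (Vec<Rect>, Vec<usize>, Vec<f32>);

    // Spawn a single worker thread that owns the model; callers talk to it
    // through a channel instead of sharing the model between threads.
    fn spawn_detector() -> mpsc::Sender<(Mat, mpsc::Sender<Detections>)> {
        let (tx, rx) = mpsc::channel::<(Mat, mpsc::Sender<Detections>)>();
        thread::spawn(move || {
            let mut model = ModelUltralyticsV8::new_from_file(
                "pretrained/yolov8n.onnx", // same file as in the Usage section
                None,
                (640, 640),
                ModelFormat::ONNX,
                DNN_BACKEND_OPENCV, // plain CPU backend keeps the sketch simple
                DNN_TARGET_CPU,
                vec![],
            )
            .unwrap();
            for (frame, reply) in rx {
                // Ignore the error if the caller has already hung up.
                let _ = reply.send(model.forward(&frame, 0.25, 0.4).unwrap());
            }
        });
        tx
    }

Callers clone the returned sender, send a `(frame, reply_tx)` pair from any thread and block on `reply_rx` for the detections.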

Prerequisites

Usage

There are some examples, but let me guide you step by step.

  1. Add this crate to your Cargo.toml:

    cargo add od_opencv
    
  2. Also add the OpenCV bindings crate to Cargo.toml:

    # I'm using version 0.66
    cargo add opencv@0.66
    
  3. Download a pretrained neural network or use your own.

    I will use the pretrained weights from the Prerequisites section.

  4. Import the "basic" OpenCV stuff in your main.rs file:

    use opencv::{
        core::{Scalar, Vector},
        imgcodecs::imread,
        imgcodecs::imwrite,
        imgproc::LINE_4,
        imgproc::rectangle,
        dnn::DNN_BACKEND_CUDA, // I will utilize my GPU for faster inference. Your setup may vary
        dnn::DNN_TARGET_CUDA,
    };
    
  5. Import the crate

    use od_opencv::{
        model_format::ModelFormat,
        // I'll use YOLOv8 by Ultralytics.
        // If you prefer traditional YOLO, then import it as:
        // model_classic::ModelYOLOClassic
        model_ultralytics::ModelUltralyticsV8
    };
    
  6. Prepare the model

    // Define classes (in this case we consider 80 COCO labels)
    let classes_labels: Vec<&str> = vec!["person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "sofa", "pottedplant", "bed", "diningtable", "toilet", "tvmonitor", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"];
    
    // Define format for OpenCV's DNN module
    let mf = ModelFormat::ONNX;
    
    // Define model's input size
    let net_width = 640;
    let net_height = 640;
    
    // Initialize optional filters.
    // E.g., if you want to detect only cats and dogs and you can't re-train the neural network,
    // you can pass `vec![15, 16]` to keep just those classes (15 is the index of `cat` in the class labels, 16 is `dog`)
    // let class_filters: Vec<usize> = vec![15, 16];
    let class_filters: Vec<usize> = vec![];
    
    // Initialize model itself
    let mut model = ModelUltralyticsV8::new_from_file("pretrained/yolov8n.onnx", None, (net_width, net_height), mf, DNN_BACKEND_CUDA, DNN_TARGET_CUDA, class_filters).unwrap();
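    // No CUDA available? Swapping in the CPU backend/target should work as well
    // (import DNN_BACKEND_OPENCV and DNN_TARGET_CPU from opencv::dnn instead):
    // let mut model = ModelUltralyticsV8::new_from_file("pretrained/yolov8n.onnx", None, (net_width, net_height), mf, DNN_BACKEND_OPENCV, DNN_TARGET_CPU, class_filters).unwrap();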
    
    // Read the image into an OpenCV Mat object.
    // It is mutable since we are going to draw bounding boxes onto it.
    let mut frame = imread("images/dog.jpg", 1).unwrap();
    
    // Feed forward image through the model
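    // (0.25 and 0.4 here are presumably the confidence and NMS thresholds; check the crate docs if in doubt)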
    let (bboxes, class_ids, confidences) = model.forward(&frame, 0.25, 0.4).unwrap();
    
    // Process results
    for (i, bbox) in bboxes.iter().enumerate() {
        // Place bounding boxes onto the image
        rectangle(&mut frame, *bbox, Scalar::from((0.0, 255.0, 0.0)), 2, LINE_4, 0).unwrap();
        // Debug output to stdout
        println!("Class: {}", classes_labels[class_ids[i]]);
        println!("\tBounding box: {:?}", bbox);
        println!("\tConfidence: {}", confidences[i]);
    }
    
    // Finally save the updated image to the file system
    imwrite("images/dog_yolov8_n.jpg", &frame, &Vector::new()).unwrap();
    
  7. You are good to go

    cargo run
    
  8. If anything goes wrong, feel free to open an issue.

References

Dependencies

~1.7–3.5MB
~31K SLoC