# caffe2-ideep
This Rust crate is part of the Caffe2 operator library translation and provides an implementation of the IDEEP backend.
The implementation is currently in the process of being translated from C++ to Rust and some function bodies may be incomplete.
This crate contains a variety of IDEEP operators, such as `IDEEPConvOp`, `IDEEPConvTransposeOp`, `IDEEPSpatialBNOp`, `IDEEPFullyConnectedOp`, `IDEEPPoolOp`, and many others. Additionally, there are various functions for registering and running IDEEP operators on specific devices, as well as functions for copying data between CPU and IDEEP tensors.
This crate is intended to provide a performant implementation of these operators for use in deep learning applications. The IDEEP backend can be particularly useful for training on CPUs, and this crate provides a Rust implementation of the backend for easy integration into Rust-based projects.
Note that some function bodies may still be in the process of translation, so not all functionality may be available.
53> what is ideep?
Intel Deep Learning Extension for Python (iDeep) is a software library developed by Intel that enables high-performance deep learning on Intel CPUs. It provides optimized deep learning primitives that can be used to build and train neural networks. The `caffe2-ideep` crate you mentioned is a Rust translation of the Caffe2 operator library, with added support for iDeep primitives to accelerate deep learning operations on Intel CPUs.
54> could you teach me about it? how does it work? why would we choose to use it, for examples? what does it do? what are its strengths?
iDeep is a deep learning library developed by Intel that provides high-performance inference acceleration for deep neural networks on Intel CPUs. It supports the most popular deep learning frameworks, including Caffe2, TensorFlow, MXNet, and PyTorch.

iDeep optimizes the execution of deep neural networks by using low-level hardware instructions provided by modern CPUs, such as Intel Advanced Vector Extensions (AVX), and by leveraging efficient algorithms specifically designed for the Intel architecture.

One of the main strengths of iDeep is its ability to execute deep neural networks efficiently on CPUs, which can provide a significant advantage over GPUs in some scenarios. This is especially true for data center and cloud deployments where CPU-based systems are often preferred for their cost-effectiveness, scalability, and ease of deployment and management.
Another strength of iDeep is its integration with the popular deep learning frameworks. This makes it easy to incorporate iDeep into existing deep learning workflows without significant changes to code or infrastructure.
Overall, iDeep provides a powerful and flexible solution for executing deep neural networks on modern CPUs, and can be a valuable tool for researchers, data scientists, and engineers working on deep learning applications.
55> what are the relevant api entrypoints into ideep and how do we use them?
The iDeep library provides APIs for performing deep learning operations, primarily on Intel CPUs. Here are some of the relevant API entrypoints into iDeep:

- `ideep::tensor`: the basic data structure in iDeep, which represents a tensor. It is used to store and manipulate multi-dimensional arrays in a contiguous block of memory.
- `ideep::algorithm`: provides APIs for deep learning operations such as convolution, pooling, and batch normalization.
- `ideep::primitive_desc`: represents the descriptor of a primitive operation in iDeep. A primitive is a basic building block of deep learning operations, and its descriptor contains all the information needed to create and execute the operation.
- `ideep::stream`: represents a device stream for executing operations asynchronously. A stream can be used to execute multiple operations concurrently on a device.
- `ideep::engine`: represents a device engine for executing operations. An engine provides a set of compute devices and is responsible for creating and managing device streams.
To use iDeep, you would typically start by creating a tensor object using the `ideep::tensor` class. You would then create an algorithm object to perform the desired deep learning operation, and specify the input and output tensors for the operation. Next, you would create a primitive descriptor object to describe the operation, and use this descriptor to create a primitive object which can be executed on a device. Finally, you would create a stream object to execute the primitive operation asynchronously on a device.
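The exact Rust-side names for this flow aren't documented here, so the sketch below uses hypothetical stand-in types (`Tensor`, `Algorithm`, `PrimitiveDesc`, `Primitive`, and `Stream` are illustrative, not the crate's real API) purely to make the five steps concrete:

```rust
// Illustrative stand-ins for the five-step flow described above; these toy
// types mirror the prose, not the real iDeep/oneDNN bindings.
struct Tensor { data: Vec<f32>, dims: Vec<usize> }
struct Algorithm;     // describes the operation to perform (e.g. convolution)
struct PrimitiveDesc; // full description: operation plus input/output layouts
struct Primitive;     // executable instance built from a descriptor
struct Stream;        // queue for asynchronous execution on a device

impl PrimitiveDesc {
    fn new(_alg: &Algorithm, _src: &Tensor, _dst: &Tensor) -> Self { PrimitiveDesc }
}
impl Primitive {
    fn new(_pd: &PrimitiveDesc) -> Self { Primitive }
}
impl Stream {
    fn new() -> Self { Stream }
    fn submit(&mut self, _p: &Primitive) { /* would execute asynchronously */ }
    fn wait(&mut self) { /* would block until submitted work completes */ }
}

fn main() {
    // 1. Create the input and output tensors.
    let src = Tensor { data: vec![0.0; 6], dims: vec![2, 3] };
    let dst = Tensor { data: vec![0.0; 6], dims: vec![2, 3] };
    // 2. Pick the operation and describe it together with its tensors.
    let alg = Algorithm;
    let pd = PrimitiveDesc::new(&alg, &src, &dst);
    // 3. Build an executable primitive from the descriptor.
    let prim = Primitive::new(&pd);
    // 4. Execute asynchronously on a stream, then synchronize.
    let mut stream = Stream::new();
    stream.submit(&prim);
    stream.wait();
}
```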
Overall, iDeep provides a powerful and flexible set of APIs for performing deep learning operations on a variety of devices, making it a popular choice for high-performance deep learning applications.
56> thanks! could we keep drilling down into the details?
Sure, what specific aspects of the IDEEP API would you like to learn more about?
57> let's learn about ideep::tensor first
`ideep::tensor` is a key class in the iDeep library. It represents a multi-dimensional array of elements, similar to the `numpy` array in Python or the `torch.Tensor` in PyTorch.

The `ideep::tensor` class provides a variety of methods for manipulating and performing operations on tensors. These include methods for accessing the shape of the tensor, setting and getting individual elements, performing element-wise arithmetic operations, and more.

One of the key strengths of `ideep::tensor` is its ability to perform these operations efficiently on hardware that supports Intel's AVX-512 instruction set. This allows for significant speedups over scalar CPU computations.
Here is an example of how to create an `ideep::tensor` object in Rust:

```rust
use ideep::*;

let shape = vec![2, 3];
let data = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
// Clone `shape` so it can be reused when building `y` further below.
let x = tensor::f32::from_data(shape.clone(), &data).unwrap();
```
In this example, we create a 2x3 tensor with the given data. The `tensor::f32::from_data` function is used to create a new `ideep::tensor` object from the given shape and data. If successful, it returns an `Option<tensor::f32>`, which we unwrap using the `.unwrap()` method.

We can then access the data and shape of the tensor using the `.data()` and `.dims()` methods:
```rust
let tensor_data = x.data().to_vec();
let tensor_shape = x.dims().to_vec();
println!("Data: {:?}", tensor_data);
println!("Shape: {:?}", tensor_shape);
```
This will output:

```text
Data: [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
Shape: [2, 3]
```
We can also perform element-wise arithmetic operations on tensors:
```rust
let y = tensor::f32::from_data(shape, &data).unwrap();
let z = x + y; // element-wise addition via the overloaded `+` operator
println!("Result: {:?}", z.data().to_vec());
```
This will output:

```text
Result: [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
```
In this example, we create another `ideep::tensor` object `y` with the same shape and data as `x`, and then add them together using the `+` operator. The resulting tensor `z` contains the element-wise sum of `x` and `y`.
This is just a simple example of what is possible with `ideep::tensor`. The library provides many more operations and methods for working with tensors, making it a powerful tool for deep learning applications.
58> interesting, thanks! what is useful/unique about the intel avx512 instruction set?
Intel AVX-512 (Advanced Vector Extensions 512) is an instruction set extension that was introduced with Intel's Xeon Phi Knights Landing and Skylake-X CPUs. It offers a significant performance boost to certain types of compute-intensive workloads, particularly in deep learning and other machine learning applications.

The main advantage of AVX-512 is that it can perform vector operations on larger registers (512 bits, hence the name) compared to previous AVX instruction sets (256 bits). This means that larger amounts of data can be processed at once, leading to faster computation times. Additionally, AVX-512 widens fused multiply-add (FMA) operations, available since the earlier FMA3 extension, to full 512-bit operands, and adds new features such as dedicated mask registers; FMA in particular can be used to speed up the convolution operations commonly used in deep learning.

Overall, the AVX-512 instruction set can significantly improve performance for certain types of compute-intensive workloads, particularly those that involve large amounts of data processing, making it a valuable tool for deep learning applications.
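As a concrete illustration of the wider registers and FMA, here is a minimal Rust sketch using the AVX-512F intrinsics from `std::arch` to fuse a multiply and an add across 16 `f32` lanes in one instruction. It assumes an x86_64 machine and a toolchain where these intrinsics are available (they spent a long time behind a nightly feature gate, so older compilers may not accept this):

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx512f")]
unsafe fn fma16(a: &[f32; 16], b: &[f32; 16], acc: &mut [f32; 16]) {
    use std::arch::x86_64::*;
    let va = _mm512_loadu_ps(a.as_ptr());    // load 16 f32 = 512 bits
    let vb = _mm512_loadu_ps(b.as_ptr());
    let vacc = _mm512_loadu_ps(acc.as_ptr());
    // Single fused instruction across all 16 lanes: acc = a * b + acc.
    let r = _mm512_fmadd_ps(va, vb, vacc);
    _mm512_storeu_ps(acc.as_mut_ptr(), r);
}

#[cfg(target_arch = "x86_64")]
fn main() {
    if !is_x86_feature_detected!("avx512f") {
        eprintln!("AVX-512F not available on this CPU");
        return;
    }
    let a = [1.0f32; 16];
    let b = [2.0f32; 16];
    let mut acc = [0.5f32; 16];
    // Safe to call: we just verified the CPU supports AVX-512F.
    unsafe { fma16(&a, &b, &mut acc) };
    println!("{:?}", acc); // every lane: 1.0 * 2.0 + 0.5 = 2.5
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {
    eprintln!("this example targets x86_64 only");
}
```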
59> so, we can use 512 bits, do we typically pack various dimensionalities or tile sizes into this amount of space? what are some examples?
Yes, packing various dimensionalities or tile sizes into the 512 bits of the AVX-512 registers can be useful for performance optimization in deep learning applications. One common technique is to use the registers to hold multiple elements of the same dimension of a tensor, so that multiple operations can be applied to those elements simultaneously. For example, a 512-bit register holds 16 single-precision values, so a convolution kernel can load a run of 16 pixels from one row of an image into a single register and operate on all 16 in parallel.

Another technique is to use the registers to hold multiple elements of different dimensions of a tensor, so that operations that involve those elements can be performed efficiently. For example, if we have a matrix multiplication operation that involves multiplying a large matrix by a small matrix, we could load the small matrix into a 512-bit register and use it to perform a series of vector-matrix multiplications with the large matrix.

There are many other ways to use the AVX-512 instruction set to optimize performance in deep learning applications, and the specific techniques used will depend on the particular operations being performed and the structure of the tensors involved.
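To make the packing idea concrete, here is a small, portable Rust sketch (no intrinsics; the helper name is illustrative) that processes a dot product 16 `f32` at a time, the same granularity as one 512-bit register. An auto-vectorizing compiler, or explicit intrinsics as above, can map each 16-element chunk to a single AVX-512 operation:

```rust
// Dot product processed in 16-lane chunks, mirroring one 512-bit register
// (16 lanes x 32-bit f32), with a scalar loop for the leftover tail.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len());
    let split = a.len() / 16 * 16;
    let (a16, a_tail) = a.split_at(split);
    let (b16, b_tail) = b.split_at(split);
    let mut acc = [0.0f32; 16]; // one "virtual register" of partial sums
    for (ca, cb) in a16.chunks_exact(16).zip(b16.chunks_exact(16)) {
        for i in 0..16 {
            acc[i] += ca[i] * cb[i]; // 16 independent multiply-adds per chunk
        }
    }
    // Horizontal reduction of the 16 lanes, plus the scalar tail.
    acc.iter().sum::<f32>()
        + a_tail.iter().zip(b_tail).map(|(x, y)| x * y).sum::<f32>()
}

fn main() {
    let a: Vec<f32> = (0..40).map(|i| i as f32).collect();
    let b = vec![2.0f32; 40];
    println!("{}", dot(&a, &b)); // 2 * (0 + 1 + ... + 39) = 1560
}
```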
60> this is a cpu architecture correct? next, let's talk about ideep::algorithm
Yes, AVX-512 is a CPU technology, though strictly speaking it is an instruction set extension rather than an architecture in itself.
Moving on to `ideep::algorithm`: this is a module in the iDeep library that provides various algorithms for deep learning operations. It provides optimized implementations of the most commonly used neural network algorithms, such as convolution, pooling, and fully connected layers.

Some of the algorithms provided by `ideep::algorithm` include:

- Convolution: performs convolution of an input tensor with a set of filters, where the filters are learned during training. Optimized implementations are provided for 1D, 2D, and 3D convolution.
- Pooling: performs max or average pooling of an input tensor, with optimized implementations for 1D, 2D, and 3D pooling.
- Batch Normalization: performs batch normalization of an input tensor to help improve the performance of deep neural networks.
- Fully Connected: performs a fully connected layer, also known as a dense layer, where each neuron is connected to every neuron in the previous layer.

The `ideep::algorithm` module provides optimized implementations of these algorithms, primarily for Intel CPUs. These implementations are designed to make efficient use of the hardware and provide high performance for deep learning workloads. A sketch of what the convolution algorithm computes follows below.
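To make the convolution algorithm concrete, here is a minimal, dependency-free Rust sketch of what a direct 2-D convolution computes (single channel, one filter, stride 1, no padding). An optimized primitive computes the same result, just vectorized and blocked for the hardware:

```rust
// Direct 2-D convolution: single input channel, one filter, stride 1,
// no padding. Output size is (H - KH + 1) x (W - KW + 1).
fn conv2d(input: &[Vec<f32>], kernel: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let (ih, iw) = (input.len(), input[0].len());
    let (kh, kw) = (kernel.len(), kernel[0].len());
    let (oh, ow) = (ih - kh + 1, iw - kw + 1);
    let mut out = vec![vec![0.0f32; ow]; oh];
    for y in 0..oh {
        for x in 0..ow {
            let mut acc = 0.0;
            for ky in 0..kh {
                for kx in 0..kw {
                    acc += input[y + ky][x + kx] * kernel[ky][kx];
                }
            }
            out[y][x] = acc; // dot product of the kernel with one input tile
        }
    }
    out
}

fn main() {
    let input = vec![vec![1.0f32; 4]; 4];      // 4x4 image of ones
    let kernel = vec![vec![1.0f32; 3]; 3];     // 3x3 box filter
    println!("{:?}", conv2d(&input, &kernel)); // 2x2 output, each element 9.0
}
```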
61> thanks, it seems like a standard set of algorithms. is there anything unique about it or something we should watch out for?
The `ideep::algorithm` module provides a set of highly optimized and scalable deep learning algorithms, so it's very useful for building high-performance deep learning applications. One thing to keep in mind is that these algorithms are highly optimized for Intel architectures and may not perform as well on other architectures. Additionally, the API may not be as intuitive as some other deep learning frameworks, so there may be a bit of a learning curve for those new to the library. Overall, though, the `ideep::algorithm` module is a powerful tool for building high-performance deep learning models.
62> thanks, next ideep::primitive_desc
`ideep::primitive_desc` is a type in the Intel oneAPI Deep Neural Network Library (oneDNN) that represents the descriptor of a primitive operation. A primitive operation is a basic computation unit in oneDNN that implements a specific algorithm on input data to produce output data.

`primitive_desc` provides a way to create a specific instance of a primitive operation, by specifying its input and output tensors, as well as any other required parameters. It also allows for querying information about the primitive, such as its expected memory format, and provides a mechanism for error handling.
For example, to create an instance of the convolution operation, one would first create a `convolution_desc` object specifying the convolution parameters such as the strides and padding, then create a `convolution_primitive_desc` object by passing in the `convolution_desc` object and the `memory_desc` objects representing the input and output tensors. Finally, one can create an instance of the convolution operation by passing in the `convolution_primitive_desc` object and the actual tensor data.
The `primitive_desc` is an important part of the oneDNN API because it allows for efficient implementation of neural network models by optimizing the computations based on the specific hardware platform and input data. It also provides a way to ensure compatibility and reproducibility across different systems and platforms.
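As a toy illustration of that composition (the type names below mirror the prose, not the real oneDNN signatures): an operation descriptor and the memory descriptors combine into a primitive descriptor, from which the executable primitive would then be built:

```rust
// Hypothetical stand-ins mirroring the flow described above.
struct MemoryDesc { dims: Vec<usize> } // tensor shape (layout omitted here)
struct ConvolutionDesc { strides: (usize, usize), padding: (usize, usize) }
struct ConvolutionPrimitiveDesc {
    op: ConvolutionDesc,
    src: MemoryDesc,
    dst: MemoryDesc,
}

impl ConvolutionPrimitiveDesc {
    fn new(op: ConvolutionDesc, src: MemoryDesc, dst: MemoryDesc) -> Self {
        Self { op, src, dst }
    }
    // Query-style accessor, analogous to asking the descriptor what memory
    // the implementation expects for its output.
    fn dst_dims(&self) -> &[usize] { &self.dst.dims }
}

fn main() {
    // 1. Describe the convolution parameters.
    let conv = ConvolutionDesc { strides: (1, 1), padding: (1, 1) };
    // 2. Describe the input and output tensors (NCHW dimensions).
    let src = MemoryDesc { dims: vec![1, 3, 224, 224] };
    let dst = MemoryDesc { dims: vec![1, 64, 224, 224] };
    // 3. Combine everything into the primitive descriptor.
    let pd = ConvolutionPrimitiveDesc::new(conv, src, dst);
    let _src_rank = pd.src.dims.len();
    println!("strides: {:?}, padding: {:?}, output dims: {:?}",
             pd.op.strides, pd.op.padding, pd.dst_dims());
}
```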
63> cool! could we go into more depth?
Sure! `ideep::primitive_desc` is a class in the iDeep library that provides a description of a specific computation that needs to be performed on a set of data. The description contains information about the computation itself, as well as the memory layout of the input and output data.

One of the main benefits of `ideep::primitive_desc` is that it allows for efficient memory allocation and management. The class provides information about the optimal memory layout for the input and output data, as well as the amount of memory required for the computation. This information can be used to allocate memory in a way that is optimal for the computation, reducing memory overhead and improving performance.

Another benefit of `ideep::primitive_desc` is that it allows for dynamic optimization of the computation. The class provides information about the data types and sizes of the input and output data, as well as the computational requirements of the operation. This information can be used to select the optimal implementation of the operation based on the available hardware and other factors.

In summary, `ideep::primitive_desc` is a key component of the iDeep library that provides a detailed description of a computation, including the memory layout of the input and output data. This information can be used to optimize memory allocation and select the optimal implementation of the computation based on hardware and other factors.
64> ok next, ideep::stream
`ideep::stream` is a class that encapsulates a stream of computation operations on the CPU. Streams are used to execute independent operations asynchronously, in parallel. This can result in better CPU utilization and increased performance.
In general, streams allow for overlapping of compute and data transfer operations. For example, while the CPU is performing a computation on one set of data, it can concurrently transfer data from another part of memory to a different location. This can lead to more efficient use of system resources and faster overall computation.
Streams can be created using the `ideep::stream::create` method. Once created, operations can be added to the stream using the `ideep::stream::enqueue` method. Operations added to a stream can be executed in any order, as long as they don't have data dependencies.

The `ideep::stream` class also provides a method to synchronize the stream, `ideep::stream::wait`. This method blocks the calling thread until all operations on the stream have completed.

Overall, `ideep::stream` is an important class for achieving high-performance computations in a parallelized, asynchronous manner.
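This is not the iDeep API, but here is a std-only Rust analogy for the pattern just described, using one thread per operation: enqueue independent work, let it run asynchronously, then wait for everything to finish:

```rust
use std::thread;

// Toy "stream": a queue of asynchronously running operations.
struct Stream {
    handles: Vec<thread::JoinHandle<()>>,
}

impl Stream {
    fn create() -> Self {
        Stream { handles: Vec::new() }
    }
    // Enqueue an independent operation; it may run concurrently with others.
    fn enqueue<F: FnOnce() + Send + 'static>(&mut self, op: F) {
        self.handles.push(thread::spawn(op));
    }
    // Block the calling thread until every enqueued operation has finished,
    // like the `wait` method described above.
    fn wait(self) {
        for h in self.handles {
            h.join().unwrap();
        }
    }
}

fn main() {
    let mut stream = Stream::create();
    stream.enqueue(|| println!("op A (e.g., a convolution)"));
    stream.enqueue(|| println!("op B (e.g., a data transfer)"));
    stream.wait(); // synchronize
    println!("all operations complete");
}
```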
65> thanks! ideep::engine
`ideep::engine` is an abstraction of a computational device or system that can execute computational primitives provided by the Intel Math Kernel Library for Deep Neural Networks (MKL-DNN, now oneDNN). The `engine` provides a way to configure and manage the execution of the primitives, such as specifying the device to use, the number of threads, and the memory allocation strategy.

An `engine` is created with a `backend_kind` that specifies the device type to use, such as CPU or GPU. The `engine` also provides a default stream and a default memory allocation mode that can be changed by setting the configuration options.

The `engine` class has a `kind` method that returns the type of the backend device that the `engine` represents. This can be useful when creating `primitive_desc` objects or when determining if an `engine` supports a certain type of computation.

The `engine` class is used extensively throughout the Intel MKL-DNN library, and provides a powerful and flexible way to configure and manage the execution of computational primitives on different types of devices.
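A tiny, purely illustrative sketch of that abstraction (hypothetical names, not the real bindings): an engine carries its backend kind and can be queried before building primitives for it:

```rust
// Hypothetical stand-in for the engine abstraction described above.
#[derive(Debug, Clone, Copy, PartialEq)]
enum BackendKind { Cpu, Gpu }

struct Engine { kind: BackendKind }

impl Engine {
    fn new(kind: BackendKind) -> Self { Engine { kind } }
    // Query the backend device type, e.g. before creating a primitive_desc.
    fn kind(&self) -> BackendKind { self.kind }
}

fn main() {
    let engine = Engine::new(BackendKind::Cpu);
    if engine.kind() == BackendKind::Cpu {
        println!("CPU engine: safe to build CPU-only primitives");
    }
    let _unused = BackendKind::Gpu; // the other variant, for completeness
}
```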
66> great! what are some real world use cases for ideep? to my understanding, it is a set of functional building blocks when optimizing deep nets for intel cpu architecture. is there anything else to it?
Yes, you are correct that iDeep is a set of functional building blocks for optimizing deep neural networks on Intel CPUs. Some real-world use cases of iDeep include image and speech recognition, natural language processing, recommendation systems, and many other applications that require deep learning techniques.

One of the key strengths of iDeep is its ability to take advantage of the advanced features of Intel CPUs, such as the AVX-512 instruction set and other hardware optimizations, to achieve faster training and inference times. Additionally, iDeep provides a flexible API that allows developers to customize their neural network architectures and optimize them for specific hardware configurations.
Another benefit of iDeep is its compatibility with other deep learning frameworks such as TensorFlow, PyTorch, and Caffe2, allowing developers to easily integrate it into their existing deep learning workflows.