#parallel-processing #cross-platform #computing #parallel #thread #performance

core-compute

Fast, simple, and cross-platform GPGPU parallel computing library

8 unstable releases (3 breaking)

0.5.1 Aug 23, 2024
0.5.0 Aug 23, 2024
0.4.0 Aug 18, 2024
0.3.3 Aug 18, 2024
0.1.0 Aug 15, 2024

#57 in #computing

Download history: 357/week @ 2024-08-11, 525/week @ 2024-08-18, 37/week @ 2024-08-25

919 downloads per month

MIT license

14KB
178 lines

core-compute

Fast, simple, and cross-platform parallel computing library

Special note

Getting started

  • First, write your kernel code in a shading language that wgpu supports (WGSL is recommended; the default shader entry point is set to `main`, so your WGSL kernel must contain a `main` function).
  • Create a variable of the compute-kernel type and set its `x`, `y`, and `z`; think of `x`, `y`, and `z` here the way you would in CUDA.
  • Create a variable of the info type holding the data you want to send to the GPU side; think of `bind` and `group` as what wgpu uses to find the right data. (Before v0.4.0, `bind` and `group` had to be set to the same value; since v0.4.0 they can be set to any value.)
  • Call `compute!(compute_kernel, &mut info, ...)`.
  • After computing, the `compute!` macro replaces the `data` field of each info with the new data the GPU wrote to it.
  • Done!
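The steps above can be sketched roughly as follows. This is a hypothetical sketch, not the crate's confirmed API: the type names (`ComputeKernel`, `Info`), their field layout, and the doubling-kernel example are assumptions made for illustration; only the `compute!` macro call and the `x`/`y`/`z`, `bind`/`group`, and `data` fields are described above. Check the docs.rs page for the exact names.

```rust
// Hypothetical usage sketch of the workflow described above.
// Type and field names are assumptions; consult the core-compute docs.
use core_compute::*;

fn main() {
    // 1. Kernel in WGSL; the default entry point must be named `main`.
    //    This example kernel (assumed) doubles each element in place.
    let kernel_src = r#"
        @group(0) @binding(0)
        var<storage, read_write> data: array<f32>;

        @compute @workgroup_size(1)
        fn main(@builtin(global_invocation_id) id: vec3<u32>) {
            data[id.x] = data[id.x] * 2.0;
        }
    "#;

    // 2. Create the compute kernel and set x, y, z (CUDA-style grid dims).
    let compute_kernel = ComputeKernel {
        code: kernel_src,
        x: 4, // one invocation per element, assuming a 4-element buffer
        y: 1,
        z: 1,
    };

    // 3. Wrap the data to send to the GPU; bind/group tell wgpu where
    //    to find it (must match @group/@binding in the kernel).
    let mut info = Info {
        data: vec![1.0f32, 2.0, 3.0, 4.0],
        bind: 0,
        group: 0,
    };

    // 4. Run the kernel; afterwards `info.data` holds the GPU's results.
    compute!(compute_kernel, &mut info);
    println!("{:?}", info.data);
}
```

Note that the `...` in `compute!(compute_kernel, &mut info, ...)` suggests the macro accepts multiple info arguments, so several buffers can be passed in one call, each with its own `bind`/`group` pair.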

Check out the docs: https://docs.rs/core-compute/latest/core_compute/

For native bindings: https://github.com/SkillfulElectro/core-compute_native.git

Dependencies

~3–34MB
~519K SLoC