A fast parser for fastq.
This library can process fastq files at about the speed of
`wc -l` (about 2GB/s on my laptop; seqan runs at
about 150MB/s). It also makes it easy to distribute the
processing of fastq records across many cores without losing much
of that performance.
See the documentation for details and examples.
We compare this library with the fastq parser in
the C++ library
seqan 2.2.0, with
kseq.h, and with the fastq reader in rust-bio.
We test 4 scenarios:
- A 2GB test file is uncompressed on a ramdisk. The program counts the number of records in the file.
- The test file is lz4 compressed on disk, with an empty page cache. Again, the program just counts the number of records.
- The test file is lz4 compressed on disk with an empty page cache, but the program sends records to a different thread, which counts the number of records.
- The same as scenario 3, but with gzip compression.
All measurements are taken with a 2GB test file (TODO describe!)
on a Haswell i7-4510U @ 2GHz. Each program is executed three
times (clearing the OS page cache where appropriate) and the best
time is used. Libraries without native support for a compression
algorithm get the input via a pipe from `lz4 -d` or `gzip -d`.
The C and C++ programs are compiled with gcc 6.2.1 with the flags
`-O3 -march=native`. All programs can be found in the
examples directory of this repository.
| | ramdisk | lz4 | lz4 + thread | gzip | gzip + thread |
|---|---|---|---|---|---|
Some notes from checking:

`fastq` spends most of the time in
`memchr()`, but in contrast to `wc`,
`fastq` has to check that headers begin with
`@` and separator lines with
`+`, and do some more bookkeeping.
`lz4 -d` uses a large buffer size (default 4MB), which seems to prevent the operating system from running
`wc` concurrently when connected by a pipe.
`fastq` avoids this problem with an internal queue.
`rust-bio` loses some time copying data and validating UTF-8. The large slowdown in the threaded version stems from the fact that it sends each record to the other thread individually. Each send (I use a
`sync_channel` from the Rust stdlib) requires the use of synchronisation primitives, and three allocations for header, sequence and quality. Collecting records in a
`Vec` and sending them only after a large number is available speeds this up from 150MB/s to 250MB/s.
`seqan` is busy allocating, and uses (I think) a naive implementation of
`memchr()` to find line breaks.