fibertools-rs
A CLI tool for creating and interacting with fiberseq BAM files.
Install 
fibertools-rs is available through bioconda and can be installed with the following command:
mamba install -c conda-forge -c bioconda fibertools-rs
Other installation methods are available in the INSTALL.md file.
Usage
ft --help
Subcommands for fibertools-rs
ft predict-m6a
Help page for predict-m6a. Predict m6A positions using HiFi kinetics data and encode the results in the MM and ML bam tags.
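The m6A calls written to the MM and ML tags can then be read with any tool that understands base-modification tags. Below is a minimal sketch using pysam (pysam is not part of fibertools-rs, and the file name is a placeholder):

```python
import pysam  # third-party library, assumed to be installed separately

# "predicted.bam" is a placeholder for a BAM produced by `ft predict-m6a`.
# check_sq=False allows unaligned fiberseq BAMs without @SQ header lines.
with pysam.AlignmentFile("predicted.bam", "rb", check_sq=False) as bam:
    for read in bam:
        # modified_bases decodes the MM/ML tags; the key ("A", 0, "a")
        # selects 6mA calls on A bases of the original (forward) strand.
        m6a = (read.modified_bases or {}).get(("A", 0, "a"), [])
        # each entry is a (query position, ML quality 0-255) pair
        print(read.query_name, len(m6a))
```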
Adding nucleosome calls to the BAM files
To add nucleosome calls to the BAM files, you can use the Python package fibertools. See that repository for installation and usage instructions.
ft extract
Help page for extract. Extracts fiberseq data from a bam file into plain text.
ft extract --all
The --all option extracts all of the fiberseq data into a single tabular format. The following is an image of the output. Note that the column names will be preserved across software versions (unless otherwise noted); however, the column order may change and new columns may be added. Therefore, when loading the data (e.g. with pandas), be sure to refer to columns by name rather than by index.
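For example, a minimal sketch of loading the table with pandas (the file name and the column name selected below are hypothetical placeholders; use the names present in your own output):

```python
import pandas as pd

# "fiberseq.all.tbl.gz" is a placeholder for the output of `ft extract --all`;
# pandas infers the gzip compression from the file extension.
df = pd.read_csv("fiberseq.all.tbl.gz", sep="\t")

# Inspect the column names present in this version of the output.
print(df.columns.tolist())

# Select columns by name rather than by integer position, since the column
# order may change between releases. "fiber" is a hypothetical column name
# used only for illustration.
fibers = df["fiber"]
```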
ft center
Help page for center. Center fiberseq reads (bam) around reference position(s).
Read the fibertools docs
You can find the docs for the latest release at https://docs.rs/fibertools-rs/latest/fibertools_rs/, or build them from source by running:
cargo doc --open
and the docs will open in your browser.
TODO items
- Use the new iterator for ft extract and group writes to try and improve speed.
- Set filters for ML depending on the model used.
- Long-format extract command.
- Add rustybam stats to ftall as an option.
- Add option result to bamlift.
- Add more test cases; learn about test modules in folders.
- Test GPU support; see if I can simplify or statically link PyTorch.
- Improve the progress bar for predict-m6a.
- Get the size of the BAM and report how far we are through it in MB/GB.
- Add unaligned, secondary, and supplementary reads to the test BAM.
- Detect GPU memory to set the batch size dynamically.