A Pipeline for Building Genomic Annotation Datasets for Deep Learning
This is a pipeline, written in Rust, for creating HDF5 input to Keras from genomic regions and annotations. It is a (somewhat) drop-in replacement for Basset's preprocessing pipeline, intended to transform a list of BED files into annotated one-hot encoded sequences for use in a deep learning model. The input and output of both pipelines should be similar, with this one being substantially faster for larger datasets.
To install, use cargo, or alternatively build from source.

```sh
cargo install genomic_interval_pipeline
```
Building from source
Ensure that you have installed cargo and Rust on your system, then clone this repository.
```sh
git clone git@github.com:Chris1221/genomic_interval_pipeline.rs.git
cd genomic_interval_pipeline.rs
```
Use cargo to build the executable. It should figure out all the dependencies for you.
```sh
cargo build --release
```
The binary will be in target/release/.
You must provide:
- A newline-separated list of gzipped BED files.
- The path to the reference genome. This must be compressed with bgzip and indexed with samtools faidx.
An example of the first can be found in data/metadata.txt.
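For illustration, a metadata file might look like the following (these file names are hypothetical):

```
data/cell_type_1.bed.gz
data/cell_type_2.bed.gz
data/cell_type_3.bed.gz
```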
Creating the relevant reference genome is straightforward.
- Download your reference genome of choice, for example hg19. Here, I just used the UCSC genome browser.
- Compress your reference genome with bgzip (if it is already bgzip-compressed, skip this step):

```sh
gunzip hg19.fa.gz
bgzip hg19.fa
```
- Index with samtools:

```sh
samtools faidx hg19.fa.gz
```
Running the pipeline
Invoke the binary with the paths to the BED file metadata and the reference genome (you don't have to specify where the index is).
```sh
genomic_interval_pipeline -i data/metadata.txt -f hg19.fa.gz -o small_dataset
```
This will create your dataset at small_dataset.h5.
|Option|Type|Description|
|---|---|---|
|`-i`|String|Path to a newline-separated list of BED files to process.|
|`-f`|String|Path to the faidx-indexed, bgzipped reference genome.|
|`-o`|String|Path to the output HDF5 dataset.|
| |Number|Minimum overlap required to merge segments (default:|
| |Bool|Perform multiclass learning rather than multilabel, i.e. exclude cases where multiple cell types are annotated, writing only unique values (default:|
| |Number|Standardised length of regions (default:|
| |String|Comma-separated list of chromosomes to use in the test set (default:|
| |String|Comma-separated list of chromosomes to use in the validation set (default:|
| |String|Level of logging (default:|
HDF5 files are essentially directories of data. There are six tables within the dataset corresponding to the training, test, and validation sequences and their labels.
Sequences are 3D arrays with dimensions (batch, length, 4), where length is optionally specified when building the dataset and refers to the standardized length of the segments.
Labels are 2D arrays with dimensions (batch, number_of_labels), where number_of_labels is the number of lines in the metadata file. You can easily recode this dataset inside the HDF5 file for more bespoke training outputs.
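As a quick sanity check on the layout, here is a minimal sketch using h5py. The training_* names match the TensorFlow example below; the test_* and validation_* names, and the recoded table at the end, are assumptions for illustration.

```python
import h5py

# List the six tables and their shapes.
with h5py.File("small_dataset.h5", "r+") as f:
    for name in ("training_sequences", "training_labels",
                 "test_sequences", "test_labels",
                 "validation_sequences", "validation_labels"):
        print(name, f[name].shape)

    # Hypothetical recoding: collapse the label columns into a single
    # binary "any annotation" label and store it as a new table.
    labels = f["training_labels"][...]
    f.create_dataset("training_labels_any", data=(labels.sum(axis=1) > 0))
```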
Labels are assigned to BED files sequentially (as a SERIAL ID would be assigned in a SQL table), but this behaviour can be overridden. Instead of a one-column metadata file, you may provide a two-column metadata file separated by spaces, where the second column is the numeric label for that file:
```
path/to/file1.bed.gz numeric_label_1
path/to/file2.bed.gz numeric_label_2
```
See the example in
Using the dataset in Keras
You can use this data in your own neural network with the TensorFlow I/O API. Here is an example in Python.
```python
import tensorflow as tf
import tensorflow_io as tfio

dataset = "small_dataset.h5"
# HDF5 tables are addressed by name, with a leading slash.
train_x = tfio.IODataset.from_hdf5(dataset, "/training_sequences")
train_y = tfio.IODataset.from_hdf5(dataset, "/training_labels")
train_dataset = tf.data.Dataset.zip((train_x, train_y))
```
You can then pass train_dataset (and, similarly, the test and validation datasets) to your model's fit method.
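To make that concrete, here is a minimal sketch of fitting a small convolutional model on the zipped dataset from the snippet above. The layer sizes, the 600 bp input length, the three output labels, and the casting step are illustrative assumptions, not something fixed by the pipeline.

```python
# Cast the one-hot sequences and labels to float32 for training
# (the on-disk dtype may differ; this is an assumption).
train_dataset = train_dataset.map(
    lambda x, y: (tf.cast(x, tf.float32), tf.cast(y, tf.float32)))

# Hypothetical architecture: adjust the input length and label count
# to match the dataset you built.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(64, 19, activation="relu", input_shape=(600, 4)),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(3, activation="sigmoid"),  # one unit per label
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(train_dataset.batch(64), epochs=10)
```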