### 25 releases

| Version | Date |
|---|---|
| 0.5.10 | Apr 8, 2024 |
| 0.5.9 | Feb 15, 2024 |
| 0.5.5 | Jan 26, 2024 |
| 0.5.2 | Dec 20, 2023 |
| 0.1.0 | Feb 16, 2021 |

**#9** in Science

**84** downloads per month

Used in **2** crates

**MIT** license

**2MB**, 14K SLoC

# Biodivine Parametrised Boolean Networks

You can now also access the full functionality of `lib-param-bn` from Python! The library is available as part of the AEON.py package.

Rust library for working with (parametrised) Boolean networks. Supports:

- Read/write Boolean network models in the `.aeon`, `.bnet`, and `.sbml` formats.
- Basic static analysis, like monotonicity checking or network decomposition.
- Network parameters and partially unknown update functions.
- Fully symbolic asynchronous transition graphs.
- (Legacy) semi-symbolic asynchronous transition graphs.

To learn more, check out the documentation in our tutorial module.

### Utility binaries

Aside from the main library, there are a few basic binaries that are used for testing or benchmarking the functionality of the main library. At the moment, these binaries are not a part of the automated testing process as they need to be compiled with optimizations and generally utilize a completely different workflow than standard Cargo tests. In the future, these should be included as separate workflows, but for now, you can use them for manual testing in case you change something in the associated algorithms.

In general, all of these binaries take a single argument: a path to a Boolean network file in any of the supported formats (however, `.bnet` is not recommended for tests that involve regulation monotonicity). As such, you can run the following command to execute any of the tests (substituting your desired `BINARY_NAME`):

```
cargo run --release --bin BINARY_NAME --features print-progress -- MODEL_PATH
```

Some algorithms can print intermediate progress when the `print-progress` feature flag is active. If you don't want this behaviour, remove `--features print-progress` from the command.

#### Testing feedback-vertex-set and independent-cycles

The feedback vertex set problem asks for a set of vertices whose removal makes the graph acyclic. A *minimum* such set is typically desired.

The independent cycles problem asks for a set of pairwise non-intersecting cycles such that, once removed, the graph becomes acyclic (i.e. every other cycle intersects a member of this independent set). A *maximum* such set is typically desired.
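The core invariant behind both problems can be sketched in plain Rust: a candidate feedback vertex set is valid exactly when the graph restricted to the remaining vertices contains no cycle. This is a self-contained illustration (the graph representation and function names here are made up for the example, not the library's API):

```rust
use std::collections::HashMap;

type Graph = HashMap<u32, Vec<u32>>;

/// Three-colour DFS: 0 = unvisited, 1 = in progress, 2 = finished.
/// Returns true if a cycle is reachable from `v`.
fn has_cycle_from(g: &Graph, v: u32, colour: &mut HashMap<u32, u8>) -> bool {
    colour.insert(v, 1);
    for &w in g.get(&v).map(|s| s.as_slice()).unwrap_or(&[]) {
        let c = colour.get(&w).copied().unwrap_or(0);
        if c == 1 {
            return true; // back edge => cycle
        }
        if c == 0 && has_cycle_from(g, w, colour) {
            return true;
        }
    }
    colour.insert(v, 2);
    false
}

/// `fvs` is a valid feedback vertex set if the graph restricted to the
/// remaining vertices is acyclic.
fn is_valid_fvs(g: &Graph, fvs: &[u32]) -> bool {
    // Restrict the graph to vertices outside the candidate set.
    let keep: Graph = g
        .iter()
        .filter(|(v, _)| !fvs.contains(*v))
        .map(|(v, succ)| (*v, succ.iter().copied().filter(|w| !fvs.contains(w)).collect()))
        .collect();
    let mut colour: HashMap<u32, u8> = HashMap::new();
    let vertices: Vec<u32> = keep.keys().copied().collect();
    for v in vertices {
        if colour.get(&v).copied().unwrap_or(0) == 0 && has_cycle_from(&keep, v, &mut colour) {
            return false;
        }
    }
    true
}

fn main() {
    // Two cycles sharing vertex 0: 0 -> 1 -> 0 and 0 -> 2 -> 0.
    let g: Graph = HashMap::from([(0, vec![1, 2]), (1, vec![0]), (2, vec![0])]);
    assert!(!is_valid_fvs(&g, &[])); // the graph itself contains cycles
    assert!(is_valid_fvs(&g, &[0])); // removing vertex 0 breaks both cycles
    println!("ok");
}
```

The independent cycles invariant is checked analogously: the chosen cycles must be pairwise disjoint, and removing their vertices must leave the graph acyclic.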

There are four binaries related to this feature: `check_fvs`, `check_ic`, `check_nfvs`, and `check_nic`. The first two test the greedy optimizing search for feedback vertex sets and independent cycles. The other two then test the negative variant of the problem (we don't test the positive variant, as it is functionally symmetrical to the negative case). Every binary checks the validity of basic invariants that should hold for any correct result.

As these are greedy, non-optimal algorithms, you should expect them to finish even for very large networks (e.g. >1000 variables). In particular, every model in the BBM benchmark can be processed in less than a second without issues.

#### Testing fixed-points and projected fixed-points

The fixed-point problem asks to identify every vertex (or, for parametrised networks, every combination of parametrisation and vertex) with no outgoing transitions.

The projected fixed-points problem asks to identify the valuations of a desired subset of variables for which there exists at least one fixed point. In particular, `x` is a fixed-point vertex if there exists a colour `c` (parameter valuation) such that `x` is a fixed point in `c`. Symmetrically, `c` is a fixed-point colour if there exists a vertex `x` such that `x` is a fixed point in `c`.
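Conceptually, the naive approach simply enumerates all states and keeps those where every update function agrees with the current value of its variable. A minimal self-contained sketch (the state encoding and names are illustrative, not the library's API):

```rust
/// Read bit `i` of state `s` (true = 1).
fn bit(s: u32, i: usize) -> bool {
    (s >> i) & 1 == 1
}

/// Enumerate all fixed points of an `n`-variable Boolean network by brute
/// force: a state is a fixed point iff every update function agrees with
/// the current value of its variable (i.e. no outgoing async transitions).
fn fixed_points(n: usize, update: &[fn(u32) -> bool]) -> Vec<u32> {
    (0..(1u32 << n))
        .filter(|&s| (0..n).all(|i| update[i](s) == bit(s, i)))
        .collect()
}

fn main() {
    // Toy 2-variable network: x0' = x0 & x1, x1' = x0.
    let updates: [fn(u32) -> bool; 2] = [
        |s| bit(s, 0) && bit(s, 1), // x0' = x0 & x1
        |s| bit(s, 0),              // x1' = x0
    ];
    let fp = fixed_points(2, &updates);
    // 00 and 11 are the only states where no variable wants to change.
    assert_eq!(fp, vec![0b00, 0b11]);
    println!("{:?}", fp);
}
```

This exponential enumeration is exactly why the naive algorithm struggles beyond small networks, as noted below for `check_fixed_points`.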

There is one test binary, `check_fixed_points`, which compares the results of several algorithms to ensure correctness. Additionally, there are `bench_fixed_points_naive`, `bench_fixed_points_symbolic`, `bench_fixed_points_solver`, `bench_fixed_points_symbolic_vertices`, and `bench_fixed_points_symbolic_colors`, which independently test the performance of each algorithm.

Note that the naive algorithm often runs out of memory on non-trivial models. As such, `check_fixed_points` can fail once the network is large or has many parameters. That's why the benchmarks are separated into individual binaries: some algorithms can and will time out or OOM.

#### PBN to colour graph dump

To analyse (very) small networks, it can be useful to dump them as explicit coloured graphs. There is a binary called `dump-graph` that does exactly this:

```
cargo build --release
./target/release/dump-graph MODEL_PATH > graph_edges.txt
```

The binary takes a path to a model file as the first argument, and dumps the graph to standard output.

Since the graph is explicit, expect the output size to be unmanageable for PBNs with roughly >10 variables and >1000 valid parametrizations (with parametrizations being the bigger bottleneck).
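A quick back-of-the-envelope bound makes the limits above concrete, assuming the asynchronous semantics described earlier (one transition per variable per state); the function name is purely illustrative:

```rust
/// Upper bound on the number of explicit edge entries in a coloured
/// asynchronous transition graph: 2^n vertices, at most `n` outgoing
/// transitions per vertex, each possibly present under up to `params`
/// distinct parametrisations.
fn explicit_edge_bound(n: u32, params: u64) -> u64 {
    let vertices = 1u64 << n;
    vertices * (n as u64) * params
}

fn main() {
    // At the suggested limits (10 variables, 1000 parametrisations),
    // the dump can already approach ~10 million edge entries.
    assert_eq!(explicit_edge_bound(10, 1000), 10_240_000);
    println!("{}", explicit_edge_bound(10, 1000));
}
```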

You can test the functionality on the `aeon_models/g2a_*.aeon` models, which should all be sufficiently small.

#### Dependencies

~3.5–8MB

~148K SLoC