
rtwo

CLI interface for Ollama, written in Rust.

4 releases

0.1.4 Jun 7, 2024
0.1.3 May 8, 2024
0.1.2 May 5, 2024
0.1.1 May 5, 2024




GPL-3.0 license



rtwo is an Ollama CLI tool written in Rust, used to query and manage an Ollama server.


Features

  • Terminal based
  • Response highlights
  • Chat history
  • Download/Delete models from ollama server
  • Simple
  • Logging



Binary releases

You can download pre-built binaries from the release page.


Install from crates.io

rtwo can be installed from crates.io:

cargo install rtwo

Build from source



  1. Clone the repo: git clone PATH_TO_REPO
  2. Build: cargo build --release

This will produce a binary executable at target/release/rtwo that you can copy to a directory in your $PATH.


Configuration

rtwo can be configured using a TOML configuration file. The file is located at:

  • Linux : $HOME/.config/rtwo/rtwo.toml
  • Mac : $HOME/Library/Application Support/rtwo/rtwo.toml
  • Windows : {FOLDERID_RoamingAppData}\rtwo\config\rtwo.toml

The default configuration is:

host = "localhost"
port = 11434
model = "llama3:70b"
verbose = false
color = true
save = true

  • host: target host for the ollama server
  • port: target port for the ollama server
  • model: model to query
  • verbose: enable/disable verbose output for responses (see Usage)
  • color: enable/disable color output for responses
  • save: enable/disable saving conversations to the local DB ($HOME/.local/share/rtwo/rtwo.db)
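
As an illustration, a configuration pointing rtwo at a remote Ollama host with a smaller default model might look like the following (the host address and model name here are placeholders, not defaults):

```toml
# Hypothetical rtwo.toml: query a remote Ollama host with mistral,
# show verbose stats, and skip saving conversations to the local DB.
host = "192.168.1.50"
port = 11434
model = "mistral"
verbose = true
color = true
save = false
```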


Usage

  -H, --host <HOST>
          Host address for ollama server. e.g.: localhost, etc.

  -p, --port <PORT>
          Host port for ollama server. e.g.: 11434, 1776, etc.

  -m, --model <MODEL>
          Model name to query. e.g.: mistral, llama3:70b, etc.
          NOTE: If model is not available on HOST, rtwo will not automatically download the model to the HOST. Use
          "pull" [-P, --pull] to download the model to the HOST.

  -v, --verbose
          Enable verbose output. Prints: model, tokens in prompt, tokens in response, and time taken after response
          is rendered to user.
          	* Model: llama3:70b
          	* Tokens in prompt: 23
          	* Tokens in response: 216
          	* Time taken: 27.174

  -c, --color
          Enable color output.

  -s, --save
          Save conversation for recall (places conversation in DB)

  -l, --list
          List previous conversations

  -L, --listmodels
          List available models on ollama server (HOST:PORT)

  -r, --restore
          Select previous conversation from local storage and pick up where you left off. This restores the context
          from a saved conversation and prints the saved output.

  -d, --delete
          Delete previous conversations from local storage.
          NOTE: action is irreversible.

  -P, --pull <MODEL>
          Pull model to ollama server for use (downloads model on HOST). e.g.: llama3.

  -D, --delmodel <MODEL>
          Delete model from ollama server (deletes model on HOST). e.g.: llama2.

  -h, --help
          Print help (see a summary with '-h')

  -V, --version
          Print version
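
Putting the flags together, a typical session might look like the sketch below. These commands assume rtwo is on your $PATH and that the configured HOST:PORT points at a reachable Ollama server; the model name is an example.

```shell
# List the models already available on the ollama server
rtwo --listmodels

# Download a model to the server, then start a saved, verbose chat with it
rtwo --pull llama3
rtwo --model llama3 --verbose --save

# Later: review saved conversations and resume one
rtwo --list
rtwo --restore
```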


Donations

  • BTC: bc1qvx8q2xxwesw22yvrftff89e79yh86s56y2p9x9
  • XMR: 84t9GUWQVJSGxF8cbMtRBd67YDAHnTsrdWVStcdpiwcAcAnVy21U6RmLdwiQdbfsyu16UqZn6qj1gGheTMkHkYA4HbVN4zS


License

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.

