
rtwo

An Ollama CLI tool written in Rust, used to query and manage an Ollama server.

Features

  • Terminal based
  • Response highlights
  • Chat history
  • Download/delete models on the Ollama server
  • Simple
  • Logging

Installation

Binary releases

You can download pre-built binaries from the release page.

crates.io

rtwo can be installed from crates.io:

cargo install rtwo
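
By default, cargo places the installed binary in $HOME/.cargo/bin, so make sure that directory is on your $PATH.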

Build from source

Prerequisites: Rust and cargo (e.g. installed via rustup).

Build

  1. Clone the repo: git clone PATH_TO_REPO
  2. Build: cargo build --release

This produces a binary at target/release/rtwo that you can copy to a directory in your $PATH.
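
For example, assuming ~/.local/bin exists and is on your $PATH:

install -m 755 target/release/rtwo ~/.local/bin/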


Configuration

rtwo can be configured using a TOML configuration file. The file is located at:

  • Linux: $HOME/.config/rtwo/rtwo.toml
  • macOS: $HOME/Library/Application Support/rtwo/rtwo.toml
  • Windows: {FOLDERID_RoamingAppData}\rtwo\config\rtwo.toml

The default configuration is:

host = "localhost"
port = 11434
model = "llama3:70b"
verbose = false
color = true
save = true

  • host: target host for the Ollama server
  • port: target port for the Ollama server
  • model: model to query
  • verbose: enable/disable verbose output for responses (see Usage)
  • color: enable/disable color output for responses
  • save: enable/disable saving conversations to the local DB ($HOME/.local/share/rtwo/rtwo.db)
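
For example, a minimal sketch of a config pointing rtwo at a remote Ollama server (the host, port, and model values below are illustrative):

host = "192.168.1.5"
port = 11434
model = "mistral"
verbose = false
color = true
save = true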

Usage

  -H, --host <HOST>
          Host address for ollama server. e.g.: localhost, 192.168.1.5, etc.

  -p, --port <PORT>
          Host port for ollama server. e.g.: 11434, 1776, etc.

  -m, --model <MODEL>
          Model name to query. e.g.: mistral, llama3:70b, etc.
          NOTE: if MODEL is not available on HOST, rtwo will not download it automatically. Use pull
          [-P, --pull] to download the model to the HOST.

  -v, --verbose
          Enable verbose output. Prints the model, tokens in prompt, tokens in response, and time taken after the
          response is rendered to the user.
          Example:
          	* Model: llama3:70b
          	* Tokens in prompt: 23
          	* Tokens in response: 216
          	* Time taken: 27.174

  -c, --color
          Enable color output.

  -s, --save
          Save conversation for recall (places conversation in DB)

  -l, --list
          List previous conversations

  -L, --listmodels
          List available models on ollama server (HOST:PORT)

  -r, --restore
          Select previous conversation from local storage and pick up where you left off. This restores the context
          from a saved conversation and prints the saved output.
          Interactive

  -d, --delete
          Delete previous conversations from local storage.
          NOTE: action is irreversible.
          Interactive

  -P, --pull <MODEL>
          Pull model to ollama server for use (downloads model on HOST). e.g.: llama3.

  -D, --delmodel <MODEL>
          Delete model from ollama server (deletes model on HOST). e.g.: llama2.

  -h, --help
          Print help (see a summary with '-h')

  -V, --version
          Print version
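
Example invocations, using only the flags documented above (model names and addresses are illustrative):

rtwo -L                           # list models available on the server
rtwo -P mistral                   # pull the mistral model onto the server
rtwo -m mistral -v -s             # query mistral, print verbose stats, save the conversation
rtwo -H 192.168.1.5 -p 11434 -r   # restore a saved conversation, targeting a remote host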

Donate

  • BTC: bc1qvx8q2xxwesw22yvrftff89e79yh86s56y2p9x9
  • XMR: 84t9GUWQVJSGxF8cbMtRBd67YDAHnTsrdWVStcdpiwcAcAnVy21U6RmLdwiQdbfsyu16UqZn6qj1gGheTMkHkYA4HbVN4zS

License

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.
