cai - The fastest CLI tool for prompting LLMs
Features
- Built with Rust 🦀 for supreme performance and speed! 🏎️
- Support for models by Groq, OpenAI, Anthropic, and local LLMs. 📚
- Prompt several models at once. 🤼
- Syntax highlighting for better readability of code snippets. 🌈
Installation
cargo install cai
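If you prefer to build from source, the standard Cargo workflow works as well. A minimal sketch, assuming the repository lives at the URL below (the URL is not stated in this README):
# Repository URL assumed, not taken from this README
git clone https://github.com/ad-si/cai
cd cai
cargo install --path .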
Usage
Before using Cai, an API key must be set up. Simply execute cai in your terminal and follow the instructions.
Cai supports the following APIs:
- Groq - Create new API key.
- OpenAI - Create new API key.
- Anthropic - Create new API key.
- Llamafile - Local Llamafile server running at http://localhost:8080.
- Ollama - Local Ollama server running at http://localhost:11434.
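For the two local options, the respective server must already be running before Cai can reach it. With Ollama, for example, the standard Ollama commands (not part of this README) look like this:
# Download the default model once, then start the server
# (it listens on http://localhost:11434 by default)
ollama pull llama3
ollama serve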
Afterwards, you can use cai to run prompts directly from the terminal:
cai List 10 fast CLI tools
Or a specific model, like Anthropic's Claude Opus:
cai cl List 10 fast CLI tools
Full help output:
$ cai help
Cai 0.6.0
The fastest CLI tool for prompting LLMs

Usage: cai [OPTIONS] [PROMPT]... [COMMAND]

Commands:
  groq       Groq [aliases: gr]
  ll         - Llama 3 shortcut (🏆 Default)
  mi         - Mixtral shortcut
  openai     OpenAI [aliases: op]
  gp         - GPT-4o shortcut
  gm         - GPT-4o mini shortcut
  anthropic  Anthropic [aliases: an]
  cl         - Claude Opus
  so         - Claude Sonnet
  ha         - Claude Haiku
  llamafile  Llamafile server hosted at http://localhost:8080 [aliases: lf]
  ollama     Ollama server hosted at http://localhost:11434 [aliases: ol]
  all        Simultaneously send prompt to each provider's default model:
             - Groq Llama3
             - Anthropic Claude Sonnet 3.5
             - OpenAI GPT-4o mini
             - Ollama Llama3
             - Llamafile
  help       Print this message or the help of the given subcommand(s)

Arguments:
  [PROMPT]...  The prompt to send to the AI model

Options:
  -r, --raw   Print raw response without any metadata
  -j, --json  Prompt LLM in JSON output mode
  -h, --help  Print help

Examples:
  # Send a prompt to the default model
  cai Which year did the Titanic sink

  # Send a prompt to each provider's default model
  cai all Which year did the Titanic sink

  # Send a prompt to Anthropic's Claude Opus
  cai anthropic claude-opus Which year did the Titanic sink
  cai an claude-opus Which year did the Titanic sink
  cai cl Which year did the Titanic sink
  cai anthropic claude-3-opus-20240229 Which year did the Titanic sink

  # Send a prompt to locally running Ollama server
  cai ollama llama3 Which year did the Titanic sink
  cai ol ll Which year did the Titanic sink

  # Add data via stdin
  cat main.rs | cai Explain this code
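The -r and -j flags make the stdin pattern scriptable. A minimal sketch, assuming the model actually honors the JSON instruction and jq is installed:
# Strip Cai's metadata (-r), request JSON output (-j),
# and post-process the result with jq
cat Cargo.toml | cai -r -j List the dependencies of this file as a JSON array | jq '.'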
Related
- AI CLI - Get answers for CLI commands from ChatGPT. (TypeScript)
- AIChat - All-in-one chat and copilot CLI for 10+ AI platforms. (Rust)
- ja - CLI / TUI app to work with AI tools. (Rust)
- llm - Access large language models from the command-line. (Python)
- smartcat - Integrate LLMs in the Unix command ecosystem. (Rust)
- tgpt - AI chatbots for the terminal without needing API keys. (Go)