5 releases

| Version | Date |
|---|---|
| 0.2.3 | Sep 6, 2025 |
| 0.2.2 | Sep 6, 2025 |
| 0.2.1 | Sep 5, 2025 |
| 0.2.0 | Sep 5, 2025 |
| 0.1.0 | Sep 5, 2025 |
snapshell (ss)
Minimal and snappy shell command generator with LLM/AI.
An alternative to GitHub Copilot's `ghcs`, snapshell quickly generates shell commands using your preferred LLM via OpenRouter.
Install
Install via crates.io (recommended):
# Install the Rust toolchain first if you don't already have cargo:
curl https://sh.rustup.rs -sSf | sh
cargo install snapshell
Set up PATH and symlink (optional):
# The binary is installed to ~/.cargo/bin by default; make sure it's on your PATH:
export PATH="$HOME/.cargo/bin:$PATH"
# Optionally create a global symlink so the command is `ss`
# (note: on Linux this shadows the iproute2 `ss` utility; sudo may be required):
ln -s "$HOME/.cargo/bin/snapshell" /usr/local/bin/ss
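If the install succeeded but your shell cannot find the binary, a minimal sketch to check and fix `PATH`, assuming the default `~/.cargo/bin` install location:

```shell
# Prepend cargo's bin dir to PATH only if it is not already there
case ":$PATH:" in
  *":$HOME/.cargo/bin:"*) ;;                      # already on PATH, nothing to do
  *) export PATH="$HOME/.cargo/bin:$PATH" ;;
esac
# Confirm the directory is now on PATH
echo ":$PATH:" | grep -q ":$HOME/.cargo/bin:" && echo "cargo bin on PATH"
```

The `case` guard keeps the snippet idempotent, so it is safe to place in a shell startup file.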
Build
Build from source and symlink to ss:
cargo build --release
# Use sudo if /usr/local/bin requires elevated permissions
sudo ln -s "$(pwd)/target/release/snapshell" /usr/local/bin/ss
OpenRouter configuration
Before using snapshell with LLM features, configure OpenRouter:
- Export your API key (and optional model) for the session:
export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"
export SNAPSHELL_OPENROUTER_MODEL="openai/gpt-oss-120b" # optional override, e.g. meta-llama/llama-3.3-8b-instruct:free
- Or create a `.env` file based on `.env.example` and load it with your shell or a tool like `direnv`:
cp .env.example .env
# edit .env and add your key and optional model:
# SNAPSHELL_OPENROUTER_API_KEY=your_openrouter_api_key
# SNAPSHELL_OPENROUTER_MODEL=openai/gpt-oss-120b
export $(cat .env | xargs)
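As an alternative to the `export $(cat .env | xargs)` one-liner, which word-splits values and can mangle quoted strings, a sketch using `set -a` to source the file; the `/tmp` path and placeholder values here are only for illustration:

```shell
# Write a throwaway .env for demonstration (placeholder values, not real keys)
cat > /tmp/snapshell.env <<'EOF'
SNAPSHELL_OPENROUTER_API_KEY=your_openrouter_api_key
SNAPSHELL_OPENROUTER_MODEL=openai/gpt-oss-120b
EOF
# `set -a` marks every subsequent assignment for export; sourcing the file
# then exports each variable, and values containing spaces survive intact
set -a
. /tmp/snapshell.env
set +a
echo "$SNAPSHELL_OPENROUTER_MODEL"
```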
Permanent setup (bash / zsh)
To make the key (and optional model) permanent, add the exports to your shell startup file.
For bash (~/.bashrc or ~/.profile):
echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.bashrc
echo 'export SNAPSHELL_OPENROUTER_MODEL="openai/gpt-oss-120b"' >> ~/.bashrc
# or
echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.profile
echo 'export SNAPSHELL_OPENROUTER_MODEL="openai/gpt-oss-120b"' >> ~/.profile
For zsh (~/.zshrc or ~/.zprofile):
echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.zshrc
echo 'export SNAPSHELL_OPENROUTER_MODEL="openai/gpt-oss-120b"' >> ~/.zshrc
# or
echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.zprofile
echo 'export SNAPSHELL_OPENROUTER_MODEL="openai/gpt-oss-120b"' >> ~/.zprofile
After editing, reload your shell or source the file:
source ~/.bashrc # or source ~/.zshrc
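The `echo ... >> ~/.bashrc` lines above append a fresh copy every time they run; a sketch of an idempotent variant that only appends when the line is missing, using a throwaway file in place of the real rc file:

```shell
RC=/tmp/demo_bashrc                  # stand-in for ~/.bashrc in this sketch
LINE='export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"'
: > "$RC"                            # start from an empty file for the demo
# grep -qxF matches the whole line literally; append only when absent
grep -qxF "$LINE" "$RC" || echo "$LINE" >> "$RC"
grep -qxF "$LINE" "$RC" || echo "$LINE" >> "$RC"   # second run adds nothing
grep -c 'SNAPSHELL_OPENROUTER_API_KEY' "$RC"       # still just one copy
```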
Quick usage
- `ss 'describe what shell command you want'` - Generate a single-line shell command, print it, copy it to the macOS clipboard, and save it to history.
- `ss -a 'chat with the model'` - Enter interactive chat mode; you can continue asking follow-ups. Type `/exit` or an empty line to quit.
- `ss -r 2 'use reasoning level 2'` - Attach a reasoning hint to the model.
- `ss -m 'provider/model' 'ask'` - Override the model (use provider-specific model strings like `groq/...` or `cerebras/...`).
- `ss -L 'ask'` - Allow multiline script output instead of forcing a one-liner.
- `ss -H` - Print saved history entries.
Flags & examples
- Single-line mode (the default):
ss "install openvino and show the command to quantize a tensorflow model"
- Force multiline output (for scripts):
ss -L "generate a bash script to backup ~/projects to /tmp/backup"
- Interactive chat mode (follow-ups):
ss -a "how to list modified rust files since yesterday?"
# After response, type follow-up questions at the `>` prompt
- Use a low-latency free model:
ss -m "meta-llama/llama-3.3-8b-instruct:free" "list files modified today"
- Override the default system instruction (applies to both modes unless a more specific override is set):
ss -s "You are an expert devops assistant. Output only shell commands." "describe what you want"
- Override single-line or multiline system instruction explicitly:
ss --system-single "Single-line-only instruction" "do X"
ss --system-multiline "Multiline-allowed instruction" -L "do Y"
- View history:
ss -H
Reasoning
snapshell supports an optional lightweight "reasoning" hint (OpenAI-style effort) you can request from the model.
- `-r, --reasoning <low|medium|high>` - set the reasoning effort. Default: `low`.
- `-S, --show-reasoning` - when set, the model may append a trailing JSON object containing the model's short reasoning, printed on the line after the command as:
{"reasoning": "short one-sentence reason here"}
Notes:
- Reasoning is not printed by default; enable it with `-S` only when you want an explanation.
- The reasoning line is not copied to the clipboard and is not saved to history; only the generated command is copied/saved.
- Example:
ss -r high -S "why can't I install TensorRT on macOS?"
# output:
# (NOT ABLE TO ANSWER): TensorRT requires NVIDIA GPUs and is not available on macOS.
# {"reasoning": "TensorRT depends on NVIDIA GPU drivers not present on macOS"}
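Because the reasoning JSON arrives on its own trailing line, stripping it back out in a pipeline is a one-liner; the two-line output below is simulated rather than a real `ss` invocation:

```shell
# Simulated `ss -S` output: command on line 1, reasoning JSON on line 2
OUT='ls -la
{"reasoning": "lists all files including hidden ones"}'
# Keep only the first line, i.e. the command that snapshell copies and saves
CMD=$(printf '%s\n' "$OUT" | sed -n '1p')
echo "$CMD"
```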
Environment variables
- `SNAPSHELL_OPENROUTER_API_KEY` - API key for OpenRouter (required to call the remote LLM).
- `SNAPSHELL_OPENROUTER_MODEL` - default model override (see OpenRouter integration below).
- `SNAPSHELL_SYSTEM` - generic system instruction override.
- `SNAPSHELL_SYSTEM_SINGLE` - override for single-line mode.
- `SNAPSHELL_SYSTEM_MULTILINE` - override for multiline mode.
See .env.example for a sample env file.
OpenRouter integration
This tool is integrated with OpenRouter. Provide your OpenRouter API key via the environment variable SNAPSHELL_OPENROUTER_API_KEY.
You can control the model used in two ways (priority order):
- CLI: pass `-m 'provider/model'` to `ss`.
- Environment: set `SNAPSHELL_OPENROUTER_MODEL` (for example `openai/gpt-oss-120b` or `groq/fast-model`).
If neither is set, snapshell falls back to the built-in default openai/gpt-oss-120b.
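The environment-variable fallback can be mirrored in a wrapper script with standard parameter expansion (the CLI flag, when given, takes precedence over both):

```shell
# Use the env override when set, otherwise the built-in default
MODEL="${SNAPSHELL_OPENROUTER_MODEL:-openai/gpt-oss-120b}"
echo "$MODEL"
```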
For the lowest-latency replies, the recommended providers are Groq and Cerebras when available; both use specialized inference hardware that can serve responses at around 1K tokens/second.
You can enforce these providers in OpenRouter: Settings > Account > Allowed Providers, then select Groq and/or Cerebras and tick the 'Always enforce' checkbox.
History
History is saved as `history.jsonl` in your OS data directory and contains the timestamp, prompt, and generated command. Use `ss -H` to view it.
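Since each history entry is one JSON object per line, the file is easy to post-process outside of `ss -H`; the field names below (`ts`, `prompt`, `command`) are an assumption for illustration, and the file is written to `/tmp` rather than the real data directory:

```shell
# Fake history file with assumed field names (ts/prompt/command)
cat > /tmp/history.jsonl <<'EOF'
{"ts": "2025-09-06T12:00:00Z", "prompt": "list files", "command": "ls -la"}
{"ts": "2025-09-06T12:05:00Z", "prompt": "disk usage", "command": "du -sh ."}
EOF
# Print just the generated commands, one JSON object per line
python3 - <<'EOF'
import json
with open("/tmp/history.jsonl") as fh:
    for line in fh:
        print(json.loads(line)["command"])
EOF
```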
Notes
- Minimal, fast, designed to return only shell commands by default.
- If the model returns extra text, use `-s`, `--system-single`, or `--system-multiline` to tighten the instructions.