hey
A minimal command-line application for quickly interacting with OpenAI's chat models, with support for streaming and syntax highlighting. The focus of hey is on ease of use, speed, and a pleasant terminal experience.

Features
- Streaming responses via async-openai
- Syntax highlighting via syntect
- Rich input editor via reedline with Vi mode, multi-line paste, and persistent history
- Command completions - tab completion for all commands (e.g., /h + Tab → /help)
- History search - use Ctrl+R to search through your input history
- Conversation history - quickly save, load, and view past conversations
- Customizable - Vi mode, themes, and more
Installation
Prerequisites
- Rust (for installation via cargo)
- OpenAI API key
Install
```shell
cargo install hey-rs
```
Setup
Set your OpenAI API key:
```shell
export OPENAI_API_KEY=your_api_key_here
```
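To avoid re-exporting the key in every session, you can persist it in your shell startup file. A minimal sketch, assuming bash (use ~/.zshrc for zsh, and adjust for fish):

```shell
# Persist the API key for future sessions (bash assumed; path is an assumption)
echo 'export OPENAI_API_KEY=your_api_key_here' >> ~/.bashrc
```

New terminals will then pick up the key automatically.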
Usage
Interactive Mode (REPL)
```shell
hey
```
Single Message
```shell
hey who was Ada Lovelace?
```
With Custom Prompt File
```shell
hey -p ~/path/to/prompt.txt
```
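The prompt file is plain text. A hypothetical sketch (the file path and contents below are made up for illustration):

```shell
# Write a custom prompt to a file (contents are illustrative only)
cat > /tmp/rust-helper.txt <<'EOF'
You are a concise Rust expert. Prefer short answers with code examples.
EOF
# Then start hey with it:
#   hey -p /tmp/rust-helper.txt
```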
Commands
All commands support tab completion - just type / and press Tab to see available commands, or start typing a command (e.g., /h) and press Tab to complete it.
| Command | Description |
|---|---|
| /exit, /quit, /q, /x | Exit the REPL |
| /clear, /c | Clear screen |
| /reset, /r | Reset conversation |
| /model, /m | Select model |
| /theme, /t | Select theme |
| /save, /s | Save conversation |
| /load, /l | Load conversation |
| /history | View conversation history |
| /help, /h | Show help |
History Search
Press Ctrl+R to search through your input history. Start typing to filter previous inputs, and press Enter to use the selected command. Press Ctrl+R again to cycle through matching results.
Configuration
hey uses a TOML configuration file located at:
- Linux: ~/.config/hey/hey.toml
- macOS: ~/Library/Application Support/hey/hey.toml
- Windows: %APPDATA%/hey/hey.toml
Common Options
```toml
# AI Model settings
model = "gpt-4o" # or "gpt-4o-mini", "gpt-3.5-turbo"
system_prompt = "You are a helpful coding assistant."
max_tokens = 2048

# Display
syntax_highlighting = true
theme = "ansi" # Run /theme to see all options
wrap_width = 100 # 0 to disable wrapping
ansi_colors = true
animations = true # Typewriter effect
greetings = true # Show "Hey!" and "Bye!" messages

# Editor
edit_mode = "emacs" # or "vi"
bracketed_paste = true
reedline_history = true # Save input history
history_max_size = 1000 # Max number of inputs to save

# Files
conversations_folder = "~/.hey" # Where to save conversations
```
All Configuration Options
| Option | Default | Description |
|---|---|---|
| system_prompt | "You are a helpful assistant." | Initial context for AI |
| model | "gpt-4o" | OpenAI model to use |
| max_tokens | 2048 | Response length limit |
| enter_repl | false | Force REPL mode with CLI message |
| wrap_width | 100 | Text wrapping width (0 = disabled) |
| syntax_highlighting | true | Code syntax highlighting |
| theme | "ansi" | Highlighting color scheme |
| ansi_colors | true | Colored terminal output |
| animations | true | Typewriter text effect |
| greetings | true | Show greeting messages ("Hey!" and "Bye!") |
| edit_mode | "emacs" | Input editor mode ("emacs" or "vi") |
| bracketed_paste | true | Multi-line paste support |
| conversations_folder | "./" | Directory for saved conversations |
| reedline_history | true | Persist input history across sessions |
| history_max_size | 1000 | Maximum input history size |
See defaults.toml for detailed documentation of all options.
Example Configurations
Minimal setup:
```toml
model = "gpt-4o-mini"
system_prompt = "You are a helpful coding assistant."
```
Plain text output:
```toml
syntax_highlighting = false
ansi_colors = false
wrap_width = 0
```
Vi user setup:
```toml
edit_mode = "vi"
theme = "base16"
conversations_folder = "~/Documents/hey-conversations"
```