Perspt: Your Terminal's Window to the AI World
"The keyboard hums, the screen aglow,
AI's wisdom, a steady flow.
Will robots take over, it's quite the fright,
Or just provide insights, day and night?
We ponder and chat, with code as our guide,
Is AI our helper or our human pride?"
Perspt (pronounced "perspect," short for Personal Spectrum Pertaining Thoughts) is a high-performance command-line interface (CLI) application that gives you a peek into the minds of Large Language Models (LLMs). Built with Rust for speed and reliability, it lets you chat with AI models from multiple providers directly in your terminal using the modern genai crate's unified API.
Why Perspt?
- Latest Model Support: Built on the modern `genai` crate, with support for the latest reasoning models such as Google's Gemini 2.5 Pro and OpenAI's o1-mini
- Real-time Streaming: Ultra-responsive streaming responses with proper reasoning-chunk handling
- Rock-solid Reliability: Comprehensive panic recovery and error handling that keeps your terminal safe
- Beautiful Interface: Modern terminal UI with markdown rendering and smooth animations
- Zero-Config Startup: Automatic provider detection from environment variables; just set your API key and go!
- Flexible Configuration: CLI arguments, environment variables, and JSON config files all work seamlessly
Features
- Interactive Chat Interface: A colorful, responsive chat interface powered by Ratatui, with smooth scrolling and custom markdown rendering.
- Simple CLI Mode: A new minimal command-line mode for direct Q&A without the TUI overlay; perfect for scripting, accessibility, or Unix-style workflows.
- Advanced Streaming: Real-time streaming of LLM responses, with support for reasoning chunks and proper event handling.
- Automatic Provider Detection: Zero-config startup that automatically detects and uses available providers based on environment variables (set `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc. and just run `perspt`!).
- Latest Provider Support: Built on the modern `genai` crate with support for cutting-edge models:
  - OpenAI (GPT-4, GPT-4-turbo, GPT-3.5-turbo, GPT-4o, GPT-4o-mini, GPT-4.1, o1-mini, o1-preview, o3-mini, and more)
  - Anthropic (Claude-3 Opus, Sonnet, Haiku, Claude-3.5 Sonnet, Claude-3.5 Haiku, and more)
  - Google Gemini (Gemini-1.5-pro, Gemini-1.5-flash, Gemini-2.0-flash, Gemini-2.5-Pro, and more)
  - Groq (Llama models with ultra-fast inference, Mixtral, Gemma, and more)
  - Cohere (Command models, Command-R, Command-R+, and more)
  - XAI (Grok models and more)
  - DeepSeek (DeepSeek-chat, DeepSeek-reasoner, and more)
  - Ollama (local models: Llama, Mistral, Code Llama, Vicuna, and custom models)
- Robust CLI Options: Full command-line support for API keys, models, provider types, and the new simple CLI mode.
- Flexible Authentication: API keys work via CLI arguments, environment variables, or configuration files.
- Smart Configuration: Intelligent configuration loading with fallbacks and validation.
- Input Queuing: Type and submit new questions even while the AI is generating a previous response.
- Conversation Export: Save your chat conversations to text files with the `/save` command, using timestamped filenames.
- Enhanced UI Feedback: Visual indicators for processing states and improved responsiveness.
- Custom Markdown Parser: Built-in markdown parser optimized for terminal rendering, with proper streaming buffer management.
- Graceful Error Handling: Robust handling of network issues, API errors, and edge cases, with user-friendly error messages.
- Extensive Documentation: Comprehensive code documentation and user guides.
Getting Started
Zero-Config Automatic Provider Detection
NEW! Perspt now features intelligent automatic provider detection. Simply set an environment variable for any supported provider, and Perspt will automatically detect and use it - no additional configuration needed!
Priority Detection Order:
1. OpenAI (`OPENAI_API_KEY`)
2. Anthropic (`ANTHROPIC_API_KEY`)
3. Google Gemini (`GEMINI_API_KEY`)
4. Groq (`GROQ_API_KEY`)
5. Cohere (`COHERE_API_KEY`)
6. XAI (`XAI_API_KEY`)
7. DeepSeek (`DEEPSEEK_API_KEY`)
8. Ollama (no API key needed; auto-detected if running)
Quick Start Examples:
# Option 1: OpenAI (will be auto-detected and used)
export OPENAI_API_KEY="sk-your-openai-key"
./target/release/perspt # That's it! Uses OpenAI with gpt-4o-mini
# Option 2: Anthropic (will be auto-detected and used)
export ANTHROPIC_API_KEY="sk-ant-your-key"
./target/release/perspt # Uses Anthropic with claude-3-5-sonnet-20241022
# Option 3: Google Gemini (will be auto-detected and used)
export GEMINI_API_KEY="your-gemini-key"
./target/release/perspt # Uses Gemini with gemini-1.5-flash
# Option 4: Ollama (no API key needed!)
# Just make sure Ollama is running: ollama serve
./target/release/perspt # Auto-detects Ollama if no other providers found
What happens behind the scenes:
1. Perspt scans your environment variables for supported provider API keys
2. It automatically selects the first available provider (based on the priority order above)
3. It sets the appropriate default model for the detected provider
4. It starts up immediately; no config files or CLI arguments needed!
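In code, this scan amounts to a first-match lookup over the priority list above. A minimal sketch of the idea (illustrative only, not Perspt's actual implementation):
```rust
// Illustrative: return the first provider whose API key is present in the
// environment, following the documented priority order. Ollama needs no
// key, so a real implementation would probe the local server as a fallback.
fn detect_provider() -> Option<(&'static str, &'static str)> {
    const PRIORITY: &[(&str, &str)] = &[
        ("openai", "OPENAI_API_KEY"),
        ("anthropic", "ANTHROPIC_API_KEY"),
        ("gemini", "GEMINI_API_KEY"),
        ("groq", "GROQ_API_KEY"),
        ("cohere", "COHERE_API_KEY"),
        ("xai", "XAI_API_KEY"),
        ("deepseek", "DEEPSEEK_API_KEY"),
    ];
    PRIORITY
        .iter()
        .copied()
        .find(|(_, var)| std::env::var(var).is_ok())
}
```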
When no providers are detected: If no API keys are found, Perspt shows helpful setup instructions:
No LLM provider configured!
To get started, either:
1. Set an environment variable for a supported provider:
• OPENAI_API_KEY=sk-your-key
• ANTHROPIC_API_KEY=sk-ant-your-key
# ... (shows all supported providers)
2. Use command line arguments:
perspt --provider-type openai --api-key sk-your-key
3. Create a config.json file with provider settings
Read the perspt book - This illustrated guide walks through the project and explains key Rust concepts
Prerequisites
- Rust: Ensure you have the Rust toolchain installed. Get it from rustup.rs.
- LLM API Key: For cloud providers, you'll need an API key from the respective provider:
- OpenAI: Get yours at platform.openai.com (supports o1-mini, o1-preview, o3-mini, GPT-4.1)
- Anthropic: Get yours at console.anthropic.com
- Google Gemini: Get yours at aistudio.google.com (supports Gemini 2.5 Pro)
- Groq: Get yours at console.groq.com
- Cohere: Get yours at dashboard.cohere.com
- XAI: Get yours at console.x.ai
- DeepSeek: Get yours at platform.deepseek.com
- Ollama: For local models, install Ollama from ollama.ai (no API key needed)
Installation
1. Clone the Repository:
   ```bash
   git clone <repository-url>  # Replace <repository-url> with the actual URL
   cd perspt
   ```
2. Build the Project:
   ```bash
   cargo build --release
   ```
   Find the executable in the `target/release` directory.
3. Quick Test (Optional):
   ```bash
   # Test with OpenAI (replace with your API key)
   ./target/release/perspt --provider-type openai --api-key sk-your-key --model gpt-4o-mini

   # Test with Google Gemini (supports latest models)
   ./target/release/perspt --provider-type gemini --api-key your-key --model gemini-2.0-flash-exp

   # Test with Anthropic
   ./target/release/perspt --provider-type anthropic --api-key your-key --model claude-3-5-sonnet-20241022
   ```
Configuration
Perspt can be configured using a `config.json` file or command-line arguments. Command-line arguments override config file settings.
Config File (`config.json`)
Create a `config.json` in the root directory of the project, or specify a custom path using the `-c` CLI argument.
Example `config.json`:
{
"providers": {
"openai": "https://api.openai.com/v1",
"anthropic": "https://api.anthropic.com",
"gemini": "https://generativelanguage.googleapis.com/v1beta/",
"groq": "https://api.groq.com/openai/v1",
"cohere": "https://api.cohere.com/v1",
"xai": "https://api.x.ai/v1",
"deepseek": "https://api.deepseek.com/v1",
"ollama": "http://localhost:11434/v1"
},
"provider_type": "openai",
"default_provider": "openai",
"default_model": "gpt-4o-mini",
"api_key": "your-api-key-here"
}
Configuration Fields:
- `providers` (Optional): A map of provider profile names to their API base URLs.
- `provider_type`: The type of LLM provider to use. Valid values: `"openai"`, `"anthropic"`, `"gemini"`, `"groq"`, `"cohere"`, `"xai"`, `"deepseek"`, `"ollama"`.
- `default_provider` (Optional): The name of the provider profile from the `providers` map to use by default.
- `default_model`: The model name to use (e.g., `"gpt-4o-mini"`, `"claude-3-5-sonnet-20241022"`, `"gemini-1.5-flash"`).
- `api_key`: Your API key for the configured provider.
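For reference, this shape maps naturally onto a serde struct. The following is only a sketch of how such a file could be deserialized; the actual struct in Perspt's source may differ:
```rust
// Hypothetical deserialization target for the config.json shape above
// (requires the serde and serde_json crates).
use std::collections::HashMap;

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct PersptConfig {
    /// Provider profile name -> API base URL (optional).
    #[serde(default)]
    providers: HashMap<String, String>,
    provider_type: String,
    default_provider: Option<String>,
    default_model: String,
    api_key: Option<String>,
}

fn load_config(path: &str) -> Result<PersptConfig, Box<dyn std::error::Error>> {
    Ok(serde_json::from_str(&std::fs::read_to_string(path)?)?)
}
```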
Example configurations for different providers:
OpenAI:
{
"provider_type": "openai",
"default_model": "gpt-4o-mini",
"api_key": "sk-your-openai-api-key"
}
Anthropic:
{
"providers": {
"anthropic": "https://api.anthropic.com"
},
"api_key": "YOUR_ANTHROPIC_API_KEY",
"default_model": "claude-3-5-sonnet-20241022",
"provider_type": "anthropic"
}
Google Gemini:
{
"providers": {
"gemini": "https://generativelanguage.googleapis.com/v1beta/"
},
"api_key": "YOUR_GEMINI_API_KEY",
"default_model": "gemini-1.5-flash",
"provider_type": "gemini"
}
Groq:
{
"providers": {
"groq": "https://api.groq.com/openai/v1"
},
"api_key": "YOUR_GROQ_API_KEY",
"default_model": "llama-3.1-70b-versatile",
"provider_type": "groq"
}
Cohere:
{
"providers": {
"cohere": "https://api.cohere.com/v1"
},
"api_key": "YOUR_COHERE_API_KEY",
"default_model": "command-r-plus",
"provider_type": "cohere"
}
XAI (Grok):
{
"providers": {
"xai": "https://api.x.ai/v1"
},
"api_key": "YOUR_XAI_API_KEY",
"default_model": "grok-beta",
"provider_type": "xai"
}
DeepSeek:
{
"providers": {
"deepseek": "https://api.deepseek.com/v1"
},
"api_key": "YOUR_DEEPSEEK_API_KEY",
"default_model": "deepseek-chat",
"provider_type": "deepseek"
}
Ollama (Local Models):
{
"providers": {
"ollama": "http://localhost:11434/v1"
},
"api_key": "not-required",
"default_model": "llama3.2",
"provider_type": "ollama"
}
Command-Line Arguments
The CLI now has fully working argument support with proper API key handling:
- `-c <FILE>`, `--config <FILE>`: Path to a custom configuration file.
- `-p <TYPE>`, `--provider-type <TYPE>`: Specify the provider type (`openai`, `anthropic`, `gemini`, `groq`, `cohere`, `xai`, `deepseek`, `ollama`).
- `-k <API_KEY>`, `--api-key <API_KEY>`: Your API key for the LLM provider.
- `-m <MODEL>`, `--model <MODEL>`: The model name (e.g., `gpt-4o-mini`, `o1-mini`, `claude-3-5-sonnet-20241022`, `gemini-2.5-pro`, `llama3.2`).
- `--provider <PROVIDER_PROFILE>`: Choose a pre-configured provider profile from your `config.json`'s `providers` map.
- `--list-models`: List available models for the configured provider.
Fixed Issues:
- CLI API keys now properly set environment variables for the genai client
- Model validation works correctly before starting the UI
- Provider type selection is properly handled
- No more "API key only works as environment variable" issues
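The first fix boils down to exporting a CLI-supplied key under the environment variable the genai client already reads. Roughly, as a sketch (not the exact code in Perspt):
```rust
// Sketch: map a provider type to its conventional API-key variable and
// export the CLI-supplied key so the genai client can pick it up.
fn apply_cli_api_key(provider_type: &str, api_key: &str) {
    let var = match provider_type {
        "openai" => "OPENAI_API_KEY",
        "anthropic" => "ANTHROPIC_API_KEY",
        "gemini" => "GEMINI_API_KEY",
        "groq" => "GROQ_API_KEY",
        "cohere" => "COHERE_API_KEY",
        "xai" => "XAI_API_KEY",
        "deepseek" => "DEEPSEEK_API_KEY",
        _ => return, // e.g. ollama needs no key
    };
    std::env::set_var(var, api_key);
}
```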
Run `target/release/perspt --help` for a full list.
Usage Examples
OpenAI (including latest reasoning models):
# Latest GPT-4o-mini (fast and efficient)
target/release/perspt --provider-type openai --api-key YOUR_OPENAI_API_KEY --model gpt-4o-mini
# GPT-4.1 (enhanced capabilities)
target/release/perspt --provider-type openai --api-key YOUR_OPENAI_API_KEY --model gpt-4.1
# OpenAI o1-mini (reasoning model)
target/release/perspt --provider-type openai --api-key YOUR_OPENAI_API_KEY --model o1-mini
# OpenAI o1-preview (advanced reasoning)
target/release/perspt --provider-type openai --api-key YOUR_OPENAI_API_KEY --model o1-preview
# OpenAI o3-mini (latest reasoning model)
target/release/perspt --provider-type openai --api-key YOUR_OPENAI_API_KEY --model o3-mini
Google Gemini (including latest models):
# Gemini 2.0 Flash (latest fast model)
target/release/perspt --provider-type gemini --api-key YOUR_GEMINI_API_KEY --model gemini-2.0-flash-exp
# Gemini 1.5 Pro (balanced performance)
target/release/perspt --provider-type gemini --api-key YOUR_GEMINI_API_KEY --model gemini-1.5-pro
Anthropic:
target/release/perspt --provider-type anthropic --api-key YOUR_ANTHROPIC_API_KEY --model claude-3-5-sonnet-20241022
Groq (Ultra-fast inference):
# Llama models with lightning-fast inference
target/release/perspt --provider-type groq --api-key YOUR_GROQ_API_KEY --model llama-3.1-70b-versatile
# Mixtral model
target/release/perspt --provider-type groq --api-key YOUR_GROQ_API_KEY --model mixtral-8x7b-32768
Cohere:
# Command-R+ (latest reasoning model)
target/release/perspt --provider-type cohere --api-key YOUR_COHERE_API_KEY --model command-r-plus
# Command-R (balanced performance)
target/release/perspt --provider-type cohere --api-key YOUR_COHERE_API_KEY --model command-r
XAI (Grok):
target/release/perspt --provider-type xai --api-key YOUR_XAI_API_KEY --model grok-beta
DeepSeek:
# DeepSeek Chat
target/release/perspt --provider-type deepseek --api-key YOUR_DEEPSEEK_API_KEY --model deepseek-chat
# DeepSeek Reasoner
target/release/perspt --provider-type deepseek --api-key YOUR_DEEPSEEK_API_KEY --model deepseek-reasoner
Ollama (Local Models - No API Key Required!):
# First, make sure Ollama is running locally:
# ollama serve
# Llama 3.2 (3B - fast and efficient)
target/release/perspt --provider-type ollama --model llama3.2
# Llama 3.1 (8B - more capable)
target/release/perspt --provider-type ollama --model llama3.1:8b
# Code Llama (for coding tasks)
target/release/perspt --provider-type ollama --model codellama
# Mistral (7B - general purpose)
target/release/perspt --provider-type ollama --model mistral
# Custom model (if you've imported one)
target/release/perspt --provider-type ollama --model your-custom-model
Using environment variables:
# Set once, use multiple times
export OPENAI_API_KEY="your-key-here"
export GEMINI_API_KEY="your-gemini-key-here"
export GROQ_API_KEY="your-groq-key-here"
# Now you can skip the --api-key argument
target/release/perspt --provider-type openai --model gpt-4o-mini
target/release/perspt --provider-type gemini --model gemini-2.0-flash-exp
target/release/perspt --provider-type groq --model llama-3.1-70b-versatile
# Ollama doesn't need API keys
target/release/perspt --provider-type ollama --model llama3.2
Using a config file:
target/release/perspt --config my_config.json
(Ensure `my_config.json` is correctly set up with `provider_type`, `api_key`, and `default_model`.)
Model Discovery & Validation
Perspt uses the modern genai crate for robust model handling and validation:
# List OpenAI models (including o1-mini, o1-preview, o3-mini, GPT-4.1)
target/release/perspt --provider-type openai --api-key YOUR_API_KEY --list-models
# List Google models (including Gemini 2.5 Pro, 2.0 Flash)
target/release/perspt --provider-type gemini --api-key YOUR_API_KEY --list-models
# List Anthropic models
target/release/perspt --provider-type anthropic --api-key YOUR_API_KEY --list-models
# List Groq models (ultra-fast inference)
target/release/perspt --provider-type groq --api-key YOUR_API_KEY --list-models
# List Cohere models
target/release/perspt --provider-type cohere --api-key YOUR_API_KEY --list-models
# List XAI models
target/release/perspt --provider-type xai --api-key YOUR_API_KEY --list-models
# List DeepSeek models
target/release/perspt --provider-type deepseek --api-key YOUR_API_KEY --list-models
# List Ollama models (local, no API key needed)
target/release/perspt --provider-type ollama --list-models
Enhanced Model Support:
- Real Model Validation: Models are validated before starting the UI to prevent runtime errors
- Latest Model Support: Built on genai crate which supports cutting-edge models like o1-mini and Gemini 2.5 Pro
- Proper Error Handling: Clear error messages when models don't exist or aren't available
- Reasoning Model Support: Full support for models with reasoning capabilities and special event handling
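The same listing is also reachable programmatically through the genai crate. A hedged sketch, assuming genai exposes a per-adapter `all_model_names` call (method and type names may differ between versions):
```rust
// Sketch: list model names for one adapter via the genai crate.
// `all_model_names` and `AdapterKind` are assumed from genai's public API.
use genai::adapter::AdapterKind;
use genai::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::default();
    for model in client.all_model_names(AdapterKind::Ollama).await? {
        println!("{model}");
    }
    Ok(())
}
```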
Simple CLI Mode - New in v0.4.5!
NEW FEATURE: Perspt now includes a minimal command-line interface mode for direct Q&A without the TUI overlay. Perfect for scripting, accessibility needs, or users who prefer Unix-style command prompts.
When to Use Simple CLI Mode
- Scripting & Automation: Integrate Perspt into shell scripts and workflows
- Accessibility: Simple, scrolling console output for users with accessibility needs
- Logging & Auditing: Keep detailed records of AI interactions for documentation
- Quick Queries: Fast, lightweight interface for simple questions
- Unix Philosophy: Clean, composable tool that follows Unix conventions
Simple CLI Usage
Basic Simple CLI Mode:
# Start simple CLI mode (uses default provider and model)
perspt --simple-cli
# With specific provider and model
perspt --simple-cli --provider-type openai --model gpt-4o-mini
# With Gemini
perspt --simple-cli --provider-type gemini --model gemini-1.5-flash
# With local Ollama (no API key needed)
perspt --simple-cli --provider-type ollama --model llama3.2
With Session Logging:
# Log entire session to a file
perspt --simple-cli --log-file my-session.txt
# Timestamped log files for organization
perspt --simple-cli --log-file "$(date +%Y%m%d_%H%M%S)_ai_session.txt"
# Combined with specific provider
perspt --simple-cli --provider-type anthropic --model claude-3-5-sonnet-20241022 --log-file claude-session.txt
Scripting Examples:
# Pipe questions directly to Perspt
echo "What is the capital of France?" | perspt --simple-cli
# Use in shell scripts
#!/bin/bash
question="Explain quantum computing in simple terms"
echo "$question" | perspt --simple-cli --log-file quantum-explanation.txt
# Chain multiple questions
{
echo "What is machine learning?"
echo "Give me 3 examples"
echo "exit"
} | perspt --simple-cli
Simple CLI Interface
The simple CLI provides a clean, minimal interface:
Perspt Simple CLI Mode
Model: gpt-4o-mini
Type 'exit' or press Ctrl+D to quit.
> What is the meaning of life?
The meaning of life is a profound philosophical question that has been
contemplated by humans throughout history. Different perspectives offer
various interpretations...
> How do I exit?
You can exit by typing 'exit' or pressing Ctrl+D.
> exit
Goodbye!
Simple CLI Features
- Unix-like Prompt: A simple `>` prompt that's familiar and non-intrusive
- Real-time Streaming: The same streaming technology as TUI mode for responsive interaction
- Session Logging: Optional logging of both user input and AI responses
- Clean Exit: Proper handling of `Ctrl+D`, the `exit` command, or `Ctrl+C`
- Full Configuration: Works with all providers, models, and authentication methods
- Error Resilience: Individual request errors don't terminate the session
Simple CLI vs TUI Mode
| Feature | Simple CLI Mode | TUI Mode |
|---|---|---|
| Interface | Minimal prompt | Rich terminal UI |
| Scrolling | Natural terminal | Built-in history |
| Markdown | Raw text | Rendered formatting |
| Navigation | Terminal native | Keyboard shortcuts |
| Logging | Built-in option | Manual `/save` command |
| Scripting | Excellent | Not suitable |
| Accessibility | High | Moderate |
| Resource Usage | Minimal | Moderate |
Advanced Simple CLI Usage
Environment Integration:
# Set up environment for regular use
export OPENAI_API_KEY="your-key"
alias ai="perspt --simple-cli"
alias ai-log="perspt --simple-cli --log-file"
# Now use anywhere
ai
ai-log research-session.txt
Configuration File for Simple CLI:
{
"provider_type": "openai",
"default_model": "gpt-4o-mini",
"api_key": "your-api-key"
}
# Use with simple CLI
perspt --simple-cli --config simple-cli-config.json
Migration from TUI to Simple CLI
If you prefer the simple CLI mode, you can make it your default:
# Create an alias for simple CLI as default
echo 'alias perspt="perspt --simple-cli"' >> ~/.bashrc # or ~/.zshrc
source ~/.bashrc
# Now 'perspt' starts in simple CLI mode by default
perspt --provider-type openai --model gpt-4o-mini
Chat Interface & Commands
Built-in Commands
Perspt includes several built-in commands that you can use during your chat session:
`/save` - Export Conversation
# Save with a timestamped filename (e.g., conversation_1735123456.txt)
/save
# Save with a custom filename
/save my_important_chat.txt
The `/save` command exports your entire conversation history (user messages and AI responses) to a plain text file. System messages are excluded from the export. The saved file includes:
- A header with the conversation title
- Timestamped messages in chronological order
- Raw text content without terminal formatting
Example saved conversation:
Perspt Conversation
==================
[2024-01-01 12:00:00] User: Hello, how are you?
[2024-01-01 12:00:01] Assistant: Hello! I'm doing well, thank you for asking...
[2024-01-01 12:01:30] User: Can you help me with Python?
[2024-01-01 12:01:31] Assistant: Of course! I'd be happy to help you with Python...
Key Bindings
- `Enter`: Send your input to the LLM, or queue it if the LLM is busy.
- `Esc`: Exit the application safely with proper terminal restoration.
- `Ctrl+C` / `Ctrl+D`: Exit the application with graceful cleanup.
- `Up Arrow` / `Down Arrow`: Scroll through chat history smoothly.
- `Page Up` / `Page Down`: Fast scroll through long conversations.
UI Improvements:
- Faster response times with 50ms event timeouts
- Better streaming buffer management for smooth markdown rendering with custom parser
- Visual feedback during model processing
- Proper terminal restoration on all exit paths
Using Ollama for Local Models
Ollama provides a fantastic way to run AI models locally on your machine without needing API keys or internet connectivity. This is perfect for privacy-conscious users, offline work, or simply experimenting with different models.
Setting Up Ollama
1. Install Ollama:
   ```bash
   # macOS
   brew install ollama

   # Linux
   curl -fsSL https://ollama.ai/install.sh | sh

   # Or download from: https://ollama.ai
   ```
2. Start the Ollama service:
   ```bash
   ollama serve
   ```
   This starts the Ollama server at `http://localhost:11434`.
3. Download models:
   ```bash
   # Llama 3.2 (3B) - Great balance of speed and capability
   ollama pull llama3.2

   # Llama 3.1 (8B) - More capable, slightly slower
   ollama pull llama3.1:8b

   # Code Llama - Optimized for coding tasks
   ollama pull codellama

   # Mistral - General purpose model
   ollama pull mistral

   # Phi-3 - Microsoft's efficient model
   ollama pull phi3
   ```
4. List available models:
   ```bash
   ollama list
   ```
Using Ollama with Perspt
Once Ollama is running, you can use it with Perspt:
# Basic usage (no API key needed!)
target/release/perspt --provider-type ollama --model llama3.2
# List available Ollama models
target/release/perspt --provider-type ollama --list-models
# Use different models
target/release/perspt --provider-type ollama --model codellama # For coding
target/release/perspt --provider-type ollama --model mistral # General purpose
target/release/perspt --provider-type ollama --model llama3.1:8b # More capable
# With configuration file
cat > ollama_config.json << EOF
{
"provider_type": "ollama",
"default_model": "llama3.2",
"api_key": "not-required"
}
EOF
target/release/perspt --config ollama_config.json
Ollama Model Recommendations
| Model | Size | Best For | Speed | Quality |
|---|---|---|---|---|
| `llama3.2` | 3B | General chat, quick responses | ⚡⚡⚡ | ⭐⭐⭐ |
| `llama3.1:8b` | 8B | Balanced performance | ⚡⚡ | ⭐⭐⭐⭐ |
| `codellama` | 7B | Code generation, programming help | ⚡⚡ | ⭐⭐⭐⭐ |
| `mistral` | 7B | General purpose, good reasoning | ⚡⚡ | ⭐⭐⭐⭐ |
| `phi3` | 3.8B | Efficient, good for resource-constrained systems | ⚡⚡⚡ | ⭐⭐⭐ |
Ollama Troubleshooting
"Connection refused" errors:
# Make sure Ollama is running
ollama serve
# Check if it's responding
curl http://localhost:11434/api/tags
"Model not found" errors:
# List available models
ollama list
# Pull the model if not available
ollama pull llama3.2
Performance issues:
# Use smaller models for better performance
target/release/perspt --provider-type ollama --model llama3.2
# Or check system resources
htop # Monitor CPU/Memory usage
Ollama Advantages
- Privacy: All processing happens locally; no data is sent to external servers
- Cost-effective: No API fees or usage limits
- Offline capable: Works without internet connectivity
- Full control: Choose exactly which models to run
- Easy model switching: Download and switch between models easily
Architecture & Technical Features
Built on Modern genai Crate
Perspt is built using the genai crate (v0.3.5), providing:
- Latest Model Support: Direct support for cutting-edge models, including:
  - OpenAI's o1-mini, o1-preview, o3-mini, and GPT-4.1 reasoning models
  - Google's Gemini 2.5 Pro and Gemini 2.0 Flash
  - The latest Claude, Mistral, and other provider models
- Advanced Streaming: Proper handling of streaming events (see the sketch after this list), including:
  - `ChatStreamEvent::Start`: response initiation
  - `ChatStreamEvent::Chunk`: regular content chunks
  - `ChatStreamEvent::ReasoningChunk`: special reasoning-model chunks
  - `ChatStreamEvent::End`: response completion
- Robust Error Handling: Comprehensive error management, with:
  - Network failure recovery
  - API authentication validation
  - Model compatibility checking
  - Graceful panic recovery with terminal restoration
- Flexible Configuration: Multiple configuration methods:
  - CLI arguments
  - Environment variables
  - JSON configuration files
  - Smart fallbacks and validation
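To make the event flow concrete, here is a minimal sketch of consuming such a stream with the genai crate. It is illustrative rather than Perspt's actual code, and exact signatures may vary slightly between genai versions:
```rust
// Sketch: stream one chat completion and handle each event kind above
// (requires the genai, tokio, and futures crates).
use futures::StreamExt;
use genai::chat::{ChatMessage, ChatRequest, ChatStreamEvent};
use genai::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::default();
    let req = ChatRequest::new(vec![ChatMessage::user("Why is the sky blue?")]);

    // The response exposes a stream of ChatStreamEvent values.
    let mut stream = client.exec_chat_stream("gpt-4o-mini", req, None).await?.stream;

    while let Some(event) = stream.next().await {
        match event? {
            ChatStreamEvent::Start => {}                          // response initiation
            ChatStreamEvent::Chunk(c) => print!("{}", c.content), // regular content
            ChatStreamEvent::ReasoningChunk(c) => eprint!("{}", c.content), // reasoning
            ChatStreamEvent::End(_) => println!(),                // completion
        }
    }
    Ok(())
}
```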
Custom Markdown Parser
Perspt includes a custom-built markdown parser optimized for terminal rendering:
- Stream-optimized: Handles real-time streaming content efficiently
- Terminal-native: Designed specifically for terminal color capabilities
- Lightweight: No external dependencies, built for performance
- Robust: Handles partial and malformed markdown gracefully
- Buffer-managed: Intelligent buffering for smooth rendering during streaming
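The buffering idea is easy to illustrate. Below is a generic sketch (not Perspt's actual parser): accumulate streamed chunks, hand back only completed lines for rendering, and keep the trailing partial line buffered until more content arrives:
```rust
/// Generic streaming line buffer; Perspt's real parser is more elaborate
/// (inline styles, code fences, colors), but the buffering principle is
/// the same.
struct StreamBuffer {
    pending: String,
}

impl StreamBuffer {
    fn new() -> Self {
        Self { pending: String::new() }
    }

    /// Feed one streamed chunk; return the lines that are now complete and
    /// safe to render. The unfinished tail stays buffered.
    fn push(&mut self, chunk: &str) -> Vec<String> {
        self.pending.push_str(chunk);
        let mut complete = Vec::new();
        while let Some(pos) = self.pending.find('\n') {
            complete.push(self.pending[..pos].to_string());
            self.pending.drain(..=pos);
        }
        complete
    }
}

fn main() {
    let mut buf = StreamBuffer::new();
    assert!(buf.push("# Hel").is_empty()); // partial line is held back
    assert_eq!(buf.push("lo\nwor"), vec!["# Hello".to_string()]);
}
```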
Key Technical Improvements
- Fixed CLI Arguments: API keys and model selection now work correctly via command line
- Enhanced Streaming: Improved buffering and event handling for smooth response rendering
- Better Authentication: Proper environment variable mapping for different providers
- Responsive UI: Reduced timeouts and improved responsiveness (50ms vs 100ms)
- Custom Markdown Rendering: Built-in parser eliminates external dependencies
- Comprehensive Documentation: Extensive code documentation and user guides
Recent Major Updates (v0.4.5)
New Simple CLI Mode (PSP-000003)
We've added a new Simple CLI Mode that provides a minimal, Unix-like command prompt interface for direct Q&A without the TUI overlay:
New Features:
- `--simple-cli` flag: Enable minimal command-line mode for direct interaction
- `--log-file <FILE>` option: Optional session logging with timestamped conversations
- Unix-style Interface: Simple `>` prompt with clean text output
- Scripting Support: Perfect for automation, piping, and shell integration
- Accessibility: Simple scrolling output for users with accessibility needs
Usage Examples:
# Basic simple CLI mode
perspt --simple-cli
# With session logging
perspt --simple-cli --log-file session.txt
# Perfect for scripting
echo "What is quantum computing?" | perspt --simple-cli
# Works with all providers
perspt --simple-cli --provider-type anthropic --model claude-3-5-sonnet-20241022
Migration to genai Crate (v0.4.0)
We've migrated from the `allms` crate to the modern `genai` crate (v0.3.5), bringing significant improvements:
Fixed Critical Issues:
- CLI Arguments Now Work: API keys, models, and provider types work correctly via the command line
- Flexible Authentication: API keys work via CLI, environment variables, or config files
- Responsive UI: Fixed keystroke waiting issues; the UI now responds immediately
- Custom Markdown Parser: Built-in markdown parser eliminates external dependencies
New Features:
- Support for latest reasoning models (o1-mini, o1-preview, Gemini 2.5 Pro)
- Enhanced streaming with proper reasoning chunk handling
- Custom markdown parser optimized for terminal rendering
- Comprehensive error handling with terminal restoration
- Model validation before UI startup
- Extensive code documentation and user guides
Reliability Improvements:
- Bulletproof panic handling that restores terminal state
- Network failure recovery
- Better error messages with troubleshooting tips
- Comprehensive logging for debugging
User Experience:
- Reduced response latency (50ms vs 100ms timeouts)
- Smoother markdown rendering with custom parser
- Better visual feedback during processing
- Improved chat history navigation
Troubleshooting
Common Issues & Solutions
"API key not found" or authentication errors:
# Method 1: Use CLI argument (recommended)
perspt --provider-type openai --api-key YOUR_API_KEY --model gpt-4o-mini
# Method 2: Set environment variable
export OPENAI_API_KEY="your-key-here"
export GEMINI_API_KEY="your-gemini-key-here"
export ANTHROPIC_API_KEY="your-claude-key-here"
export GROQ_API_KEY="your-groq-key-here"
export COHERE_API_KEY="your-cohere-key-here"
export XAI_API_KEY="your-xai-key-here"
export DEEPSEEK_API_KEY="your-deepseek-key-here"
# Method 3: Ollama doesn't need API keys
perspt --provider-type ollama --model llama3.2
"Model not found" errors:
# List available models first
perspt --provider-type openai --api-key YOUR_KEY --list-models
# Use exact model names from the list
perspt --provider-type openai --api-key YOUR_KEY --model gpt-4o-mini
Terminal corruption after crash:
# Reset terminal (if needed)
reset
stty sane
Permission denied errors:
# Make sure the binary is executable
chmod +x target/release/perspt
# Or use cargo run for development
cargo run -- --provider-type openai --api-key YOUR_KEY
Documentation generation errors:
# If you see "Unrecognized option" errors when generating docs:
cargo doc --no-deps
# The project includes custom rustdoc styling that's compatible with rustdoc 1.87.0+
Getting Help:
- Use `--help` for a full argument list: `perspt --help`
- Check logs with: `RUST_LOG=debug perspt ...`
- Validate configuration with: `perspt --list-models`
- Test different providers to isolate issues
Best Practices
1. Always validate your setup first:
   ```bash
   perspt --provider-type YOUR_PROVIDER --api-key YOUR_KEY --list-models
   ```
2. Use environment variables for security:
   ```bash
   export OPENAI_API_KEY="sk-..."
   perspt --provider-type openai --model gpt-4o-mini
   ```
3. Start with simple models:
   ```bash
   # These are reliable and fast
   perspt --provider-type openai --model gpt-4o-mini
   perspt --provider-type gemini --model gemini-1.5-flash
   perspt --provider-type ollama --model llama3.2  # No API key needed!
   ```
4. Check the logs if issues persist:
   ```bash
   RUST_LOG=debug perspt --provider-type openai --model gpt-4o-mini
   ```
CI/CD & Releases
This project uses GitHub Actions for comprehensive CI/CD:
Continuous Integration
- Multi-Platform Testing: Automated testing on Ubuntu, Windows, and macOS
- Code Quality: Automated formatting checks, clippy linting, and security audits
- Documentation: Automated building of both Rust API docs and Sphinx documentation
Automated Releases
- Cross-Platform Binaries: Automatic generation of optimized binaries for:
- Linux (x86_64)
- Windows (x86_64)
- macOS (x86_64 and ARM64)
- Documentation Packaging: Complete documentation bundles included in releases
- Checksum Generation: SHA256 checksums for all release artifacts
Documentation Deployment
- GitHub Pages: Automatic deployment of documentation to GitHub Pages
- Dual Documentation: Both user guides (Sphinx) and API documentation (rustdoc)
- Live Updates: Documentation automatically updates on main branch changes
Getting Pre-built Binaries
Instead of building from source, you can download pre-built binaries from the releases page:
1. Navigate to the latest release
2. Download the appropriate binary for your platform
3. Make it executable: `chmod +x perspt-*` (Linux/macOS)
4. Move it to your PATH: `sudo mv perspt-* /usr/local/bin/perspt`
Documentation
- Live Documentation: https://eonseed.github.io/perspt/
- User Guide: Comprehensive tutorials and usage examples
- API Documentation: Detailed Rust API documentation
Contributing
Contributions are welcome! Please open issues or submit pull requests for any bugs, features, or improvements.
Development Workflow
- Fork the repository
- Create a feature branch
- Make your changes with tests
- Ensure CI passes locally: `cargo test && cargo clippy && cargo fmt --check`
- Submit a pull request
The CI will automatically test your changes on all supported platforms.
License
Perspt is released under the GNU Lesser General Public License v3.0 (LGPL-3.0). See the `LICENSE` file for details.
Authors
- Vikrant Rathore
- Ronak Rathore
Perspt: Personal Spectrum Pertaining Thoughts โ the human lens through which we explore the enigma of AI and its implications for humanity.