aicommit
A CLI tool that generates concise and descriptive git commit messages using LLMs (Large Language Models).
Features
- 🤖 Uses LLMs to generate meaningful commit messages from your changes
- 🔄 Supports multiple LLM providers (OpenRouter, Ollama)
- ⚡ Fast and efficient - works directly from your terminal
- 🛠️ Easy configuration and customization
- 💰 Transparent token usage and cost tracking
- 📦 Version management with automatic incrementation
Installation
Install via cargo:
cargo install aicommit
Or build from source:
git clone https://github.com/yourusername/aicommit
cd aicommit
cargo install --path .
Quick Start
1. Add a provider:
aicommit --add
2. Make some changes to your code
3. Create a commit:
aicommit
Provider Management
List all configured providers:
aicommit --list
Set active provider:
aicommit --set <provider-id>
Version Management
Automatically increment version in a file before commit:
aicommit --version-file "./version" --version-iterate
Synchronize version with Cargo.toml:
aicommit --version-file "./version" --version-cargo
Both operations can be combined:
aicommit --version-file "./version" --version-cargo --version-iterate
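The README does not spell out the bump rule used by --version-iterate. As a minimal sketch, assuming the version file holds a single "MAJOR.MINOR.PATCH" string and the patch component is what gets incremented:

```rust
// Hypothetical sketch of a patch-version bump, as `--version-iterate` might do.
// Assumption: the version file contains one semver-style "MAJOR.MINOR.PATCH" string.
fn bump_patch(version: &str) -> Option<String> {
    let mut parts: Vec<u64> = version
        .trim()
        .split('.')
        .map(|p| p.parse().ok())
        .collect::<Option<Vec<u64>>>()?; // reject non-numeric components
    if parts.len() != 3 {
        return None; // reject anything that isn't MAJOR.MINOR.PATCH
    }
    parts[2] += 1; // increment the patch component
    Some(format!("{}.{}.{}", parts[0], parts[1], parts[2]))
}

fn main() {
    // e.g. the 0.1.5 -> 0.1.6 step seen in this crate's own release history
    println!("{:?}", bump_patch("0.1.5"));
}
```

With --version-cargo, the resulting string would additionally be written into Cargo.toml's version field so the two stay in sync.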
Configuration
The configuration file is stored at ~/.aicommit.json. You can edit it directly with:
aicommit --config
Provider Configuration
Each provider can be configured with the following settings:
- max_tokens: Maximum number of tokens in the response (default: 50)
- temperature: Controls randomness in the response (0.0-1.0, default: 0.3)
For OpenRouter, token costs are automatically fetched from their API. For Ollama, you can specify your own costs if you want to track usage.
Supported LLM Providers
OpenRouter
{
"providers": [{
"id": "550e8400-e29b-41d4-a716-446655440000",
"provider": "openrouter",
"api_key": "sk-or-v1-...",
"model": "mistralai/mistral-tiny",
"max_tokens": 50,
"temperature": 0.3,
"input_cost_per_1k_tokens": 0.25,
"output_cost_per_1k_tokens": 0.25
}],
"active_provider": "550e8400-e29b-41d4-a716-446655440000"
}
Ollama
{
"providers": [{
"id": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"provider": "ollama",
"url": "http://localhost:11434",
"model": "llama2",
"max_tokens": 50,
"temperature": 0.3,
"input_cost_per_1k_tokens": 0.0,
"output_cost_per_1k_tokens": 0.0
}],
"active_provider": "67e55044-10b1-426f-9247-bb680e5fe0c8"
}
Usage Information
When generating a commit message, the tool will display:
- Number of tokens used (input and output)
- Total API cost (calculated separately for input and output tokens)
Example output:
Generated commit message: Add support for multiple LLM providers
Tokens: 8↑ 32↓
API Cost: $0.0100
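The cost figure above follows from the per-1k-token rates in the provider config (0.25 per 1k tokens for both input and output in the OpenRouter example above). A sketch of the arithmetic:

```rust
// Input and output tokens are billed separately against their per-1k-token rates.
fn api_cost(input_tokens: u64, output_tokens: u64, in_rate_per_1k: f64, out_rate_per_1k: f64) -> f64 {
    input_tokens as f64 / 1000.0 * in_rate_per_1k
        + output_tokens as f64 / 1000.0 * out_rate_per_1k
}

fn main() {
    // "Tokens: 8↑ 32↓" at 0.25 / 1k each: 0.002 + 0.008 = 0.01
    let cost = api_cost(8, 32, 0.25, 0.25);
    println!("API Cost: ${:.4}", cost); // prints "API Cost: $0.0100"
}
```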
You can have multiple providers configured and switch between them by changing the active_provider field to match the desired provider's id.
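This selection can be modeled as a simple lookup of active_provider against the providers list. The struct and field names below mirror the JSON keys shown above but are illustrative, not the tool's actual internal types:

```rust
// Illustrative model of provider selection: `active_provider` holds the id
// of the entry in `providers` that should be used for the next commit.
struct Provider {
    id: String,
    model: String,
}

struct Config {
    providers: Vec<Provider>,
    active_provider: String,
}

impl Config {
    // Return the provider whose id matches `active_provider`, if any.
    fn active(&self) -> Option<&Provider> {
        self.providers.iter().find(|p| p.id == self.active_provider)
    }
}

fn main() {
    let cfg = Config {
        providers: vec![
            Provider { id: "550e8400-e29b-41d4-a716-446655440000".into(), model: "mistralai/mistral-tiny".into() },
            Provider { id: "67e55044-10b1-426f-9247-bb680e5fe0c8".into(), model: "llama2".into() },
        ],
        active_provider: "67e55044-10b1-426f-9247-bb680e5fe0c8".into(),
    };
    // With active_provider pointing at the Ollama entry, its model is selected.
    println!("{}", cfg.active().map(|p| p.model.as_str()).unwrap_or("none"));
}
```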
License
This project is licensed under the MIT License - see the LICENSE file for details.