aicommit
A CLI tool that generates concise and descriptive git commit messages using LLMs (Large Language Models).
Features
Implemented Features
- ✅ Uses LLMs to generate meaningful commit messages from your changes
- ✅ Supports multiple LLM providers:
- OpenRouter (cloud)
- Simple Free OpenRouter (automatically uses best available free models)
- Ollama (local)
- OpenAI-compatible endpoints (LM Studio, local OpenAI proxy, etc.)
- ✅ Automatically stages changes with the --add option
- ✅ Pushes commits automatically with --push
- ✅ Interactive mode with --dry-run
- ✅ Watch mode with --watch
- ✅ Verbose mode with --verbose
- ✅ Version control helpers:
  - Automatic version bumping (--version-iterate)
  - Cargo.toml version sync (--version-cargo)
  - package.json version sync (--version-npm)
  - GitHub version update (--version-github)
- ✅ Smart retry mechanism for API failures
- ✅ Easy configuration management
- ✅ VS Code extension available
Simple Free Mode
The Simple Free mode allows you to use OpenRouter's free models without having to manually select a model. You only need to provide an OpenRouter API key, and the system will:
- Automatically query OpenRouter for currently available free models
- Select the best available free model based on an internally ranked list
- Automatically switch to alternative models if one fails
- Track which models have failed and avoid them in future attempts
- Fall back to predefined free models if network connectivity is unavailable
To set up Simple Free mode:
# Interactive setup
aicommit --add-provider
# Select "Simple Free OpenRouter" from the menu
# Or non-interactive setup
aicommit --add-simple-free --openrouter-api-key=<YOUR_API_KEY>
Benefits of Simple Free Mode
- Zero Cost: Uses only free models from OpenRouter
- Automatic Selection: No need to manually choose the best free model
- Resilient Operation: If one model fails, it automatically switches to the next best model
- Smart Failover: Remembers which models have failed and avoids them in future attempts
- Always Up-to-Date: Checks for currently available free models each time
- Best Quality First: Uses a predefined ranking of models, prioritizing the most powerful ones
- Future-Proof: Intelligently handles new models by analyzing model names for parameter counts
- Offline Capable: Works even when network connectivity to OpenRouter is unavailable by using predefined models
The ranked list includes powerful models like:
- Meta's Llama 4 Maverick and Scout
- NVIDIA's Nemotron Ultra models (253B parameters)
- Qwen's massive 235B parameter models
- Many large models from the 70B+ parameter family
- And dozens of other high-quality free options of various sizes
Even if the preferred models list becomes outdated over time, the system will intelligently identify the best available models based on their parameter size by analyzing model names (e.g., models with "70b" or "32b" in their names).
For developers who want to see all available free models, a utility script is included:
python bin/get_free_models.py
This script will:
- Fetch all available models from OpenRouter
- Identify which ones are free
- Save the results to JSON and text files for reference
- Display a summary of available options
Installation
To install aicommit, use the following npm command:
npm install -g aicommit
For Rust users, you can install using cargo:
cargo install aicommit
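After installing with either method, you can confirm the binary is on your PATH; the workflow diagram later in this document lists a --version flag, so a quick sanity check might look like this:
# Sanity check after installation
aicommit --version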
Quick Start
- Set up a provider:
aicommit --add-provider
- Generate a commit message:
git add .
aicommit
- Or stage and commit in one step:
aicommit --add
Provider Management
Add a provider in interactive mode:
aicommit --add-provider
Add providers in non-interactive mode:
# Add OpenRouter provider
aicommit --add-provider --add-openrouter --openrouter-api-key "your-api-key" --openrouter-model "mistralai/mistral-tiny"
# Add Ollama provider
aicommit --add-provider --add-ollama --ollama-url "http://localhost:11434" --ollama-model "llama2"
# Add OpenAI compatible provider
aicommit --add-provider --add-openai-compatible \
--openai-compatible-api-key "your-api-key" \
--openai-compatible-api-url "https://api.deep-foundation.tech/v1/chat/completions" \
--openai-compatible-model "gpt-4o-mini"
Optional parameters for non-interactive mode:
- --max-tokens: Maximum number of tokens (default: 50)
- --temperature: Controls randomness (default: 0.3)
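For example, assuming these optional parameters can be passed alongside the provider flags as the list above suggests, a tuned non-interactive setup might look like this (the API key is a placeholder):
# Add an OpenRouter provider with custom generation settings
aicommit --add-provider --add-openrouter \
  --openrouter-api-key "your-api-key" \
  --openrouter-model "mistralai/mistral-tiny" \
  --max-tokens 100 \
  --temperature 0.2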
List all configured providers:
aicommit --list
Set active provider:
aicommit --set <provider-id>
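For instance, after configuring more than one provider, a typical switch might look like this (the ID shown is illustrative; use an ID from your own --list output):
# Show configured providers and their IDs
aicommit --list
# Activate the provider you want to use
aicommit --set 550e8400-e29b-41d4-a716-446655440000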
Version Management
aicommit supports automatic version management with the following features:
- Automatic version incrementation using a version file:
aicommit --version-file version --version-iterate
- Synchronize version with Cargo.toml:
aicommit --version-file version --version-iterate --version-cargo
- Synchronize version with package.json:
aicommit --version-file version --version-iterate --version-npm
- Update version on GitHub (creates a new tag):
aicommit --version-file version --version-iterate --version-github
You can combine these flags to update multiple files at once:
aicommit --version-file version --version-iterate --version-cargo --version-npm --version-github
VS Code Extension
aicommit now includes a VS Code extension for seamless integration with the editor:
- Navigate to the vscode-extension directory
cd vscode-extension
- Install the extension locally for development
code --install-extension aicommit-vscode-0.1.0.vsix
Or build the extension package manually:
# Install vsce if not already installed
npm install -g @vscode/vsce
# Package the extension
vsce package
Once installed, you can generate commit messages directly from the Source Control view in VS Code by clicking the "AICommit: Generate Commit Message" button.
See the VS Code Extension README for more details.
Configuration
The configuration file is stored at ~/.aicommit.json. You can edit it directly with:
aicommit --config
Global Configuration
The configuration file supports the following global settings:
{
"providers": [...],
"active_provider": "provider-id",
"retry_attempts": 3 // Number of attempts to generate commit message if provider fails
}
- retry_attempts: Number of retry attempts if the provider fails (default: 3)
  - Waits 5 seconds between attempts
  - Shows informative messages about retry progress
  - Can be adjusted based on your needs (e.g., set to 5 for less stable providers)
Provider Configuration
Each provider can be configured with the following settings:
- max_tokens: Maximum number of tokens in the response (default: 200)
- temperature: Controls randomness in the response (0.0-1.0, default: 0.3)
Example configuration with all options:
{
"providers": [{
"id": "550e8400-e29b-41d4-a716-446655440000",
"provider": "openrouter",
"api_key": "sk-or-v1-...",
"model": "mistralai/mistral-tiny",
"max_tokens": 200,
"temperature": 0.3
}],
"active_provider": "550e8400-e29b-41d4-a716-446655440000",
"retry_attempts": 3
}
For OpenRouter, token costs are automatically fetched from their API. For Ollama, you can specify your own costs if you want to track usage.
Supported LLM Providers
Simple Free OpenRouter
{
"providers": [{
"id": "550e8400-e29b-41d4-a716-446655440000",
"provider": "simple_free_openrouter",
"api_key": "sk-or-v1-...",
"max_tokens": 50,
"temperature": 0.3,
"failed_models": []
}],
"active_provider": "550e8400-e29b-41d4-a716-446655440000"
}
The Simple Free mode offers a hassle-free way to use OpenRouter's free models:
- Automatic Model Selection: No need to specify a model - the system queries OpenRouter's API for all available models and filters for free ones
- Intelligent Ranking: Uses an internal ranked list of preferred models (maintained in the codebase) to select the best available free model
- Failure Handling: If a model fails, it is added to the failed_models array and won't be used again in future attempts
- Fallback System: Automatically falls back to the next best available model if the preferred one fails
- Network Resilience: Can operate even when your network connection to OpenRouter is unavailable by using predefined models
- Free Usage: Takes advantage of OpenRouter's free models that have free quotas or free access
- Future-Proof Design: Even when new models appear that aren't in the preferred list, the system can intelligently identify high-quality models by analyzing model names for parameter counts (e.g., models with "70b" in their name are prioritized over those with "7b")
- Smart Model Analysis: Uses a sophisticated algorithm to extract parameter counts from model names and prioritize larger models when none of the preferred models are available
This approach ensures that your aicommit installation will continue to work effectively even years later, as it can adapt to the changing landscape of available free models on OpenRouter.
This is the recommended option for most users who want to use aicommit without worrying about model selection or costs.
OpenRouter
{
"providers": [{
"id": "550e8400-e29b-41d4-a716-446655440000",
"provider": "openrouter",
"api_key": "sk-or-v1-...",
"model": "mistralai/mistral-tiny",
"max_tokens": 50,
"temperature": 0.3,
"input_cost_per_1k_tokens": 0.25,
"output_cost_per_1k_tokens": 0.25
}],
"active_provider": "550e8400-e29b-41d4-a716-446655440000"
}
Recommended Providers through OpenRouter
- 🌟 Google AI Studio - 1,000,000 tokens for free: "google/gemini-2.0-flash-exp:free"
- 🌟 DeepSeek: "deepseek/deepseek-chat"
Ollama
{
"providers": [{
"id": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"provider": "ollama",
"url": "http://localhost:11434",
"model": "llama2",
"max_tokens": 50,
"temperature": 0.3,
"input_cost_per_1k_tokens": 0.0,
"output_cost_per_1k_tokens": 0.0
}],
"active_provider": "67e55044-10b1-426f-9247-bb680e5fe0c8"
}
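A typical local setup matching this configuration might look like the following sketch; it assumes the Ollama CLI is installed and its server is running on the default port:
# Make sure the model is available locally
ollama pull llama2
# Register the local Ollama server as a provider
aicommit --add-provider --add-ollama --ollama-url "http://localhost:11434" --ollama-model "llama2"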
OpenAI-compatible API
You can use any service that provides an OpenAI-compatible API endpoint.
Example: DeepGPTBot
For example, you can use DeepGPTBot's OpenAI-compatible API for generating commit messages. Here's how to set it up:
1. Get your API key from Telegram:
   - Open @DeepGPTBot in Telegram
   - Use the /api command to get your API key
2. Configure aicommit (choose one method):
   Interactive mode:
   aicommit --add-provider
   Select "OpenAI Compatible" and enter:
   - API Key: Your key from @DeepGPTBot
   - API URL: https://api.deep-foundation.tech/v1/chat/completions
   - Model: gpt-4o-mini
   - Max tokens: 50 (default)
   - Temperature: 0.3 (default)
   Non-interactive mode:
   aicommit --add-provider --add-openai-compatible \
     --openai-compatible-api-key "your-api-key" \
     --openai-compatible-api-url "https://api.deep-foundation.tech/v1/chat/completions" \
     --openai-compatible-model "gpt-4o-mini"
3. Start using it:
   aicommit
Example: LM Studio
LM Studio runs a local server that is OpenAI-compatible. Here's how to configure aicommit to use it:
1. Start LM Studio: Launch the LM Studio application.
2. Load a Model: Select and load the model you want to use (e.g., Llama 3, Mistral).
3. Start the Server: Navigate to the "Local Server" tab (usually represented by <->) and click "Start Server".
4. Note the URL: LM Studio will display the server URL, typically http://localhost:1234/v1/chat/completions.
5. Configure aicommit (choose one method):
   Interactive mode:
   aicommit --add-provider
   Select "OpenAI Compatible" and enter:
   - API Key: lm-studio (or any non-empty string, as it's often ignored by the local server)
   - API URL: http://localhost:1234/v1/chat/completions (or the URL shown in LM Studio)
   - Model: lm-studio-model (or any descriptive name; the actual model used is determined by what's loaded in LM Studio)
   - Max tokens: 50 (or adjust as needed)
   - Temperature: 0.3 (or adjust as needed)
   Important: The Model field here is just a label for aicommit. The actual LLM used (e.g., llama-3.2-1b-instruct) is determined by the model you have loaded and selected within the LM Studio application's server tab.
   Non-interactive mode:
   aicommit --add-provider --add-openai-compatible \
     --openai-compatible-api-key "lm-studio" \
     --openai-compatible-api-url "http://localhost:1234/v1/chat/completions" \
     --openai-compatible-model "mlx-community/Llama-3.2-1B-Instruct-4bit"
6. Select the Provider: If this isn't your only provider, make sure it's active using aicommit --set <provider-id>. You can find the ID using aicommit --list.
7. Start using it:
   aicommit
Keep the LM Studio server running while using aicommit.
Upcoming Features
- ⏳ Hooks for Git systems (pre-commit, post-commit)
- ⏳ Support for more LLM providers
- ⏳ Integration with IDEs and editors
- ⏳ EasyCode: VS Code integration for commit message generation directly from editor
- ⏳ Command history and reuse of previous messages
- ⏳ Message templates and customization options
Usage Information
When generating a commit message, the tool will display:
- Number of tokens used (input and output)
- Total API cost (calculated separately for input and output tokens)
Example output:
Generated commit message: Add support for multiple LLM providers
Tokens: 8↑ 32↓
API Cost: $0.0100
You can have multiple providers configured and switch between them by changing the active_provider field to match the desired provider's id.
Staging Changes
By default, aicommit will only commit changes that have been staged using git add. To automatically stage all changes before committing, use the --add flag:
# Only commit previously staged changes
aicommit
# Automatically stage and commit all changes
aicommit --add
# Stage all changes, commit, and push (automatically sets up upstream if needed)
aicommit --add --push
# Stage all changes, pull before commit, and push after (automatically sets up upstream if needed)
aicommit --add --pull --push
Automatic Upstream Branch Setup
When using the --pull or --push flags, aicommit automatically handles upstream branch configuration:
- If the current branch has no upstream set:
  # Automatically runs git push --set-upstream origin <branch> when needed
  aicommit --push
  # Automatically sets up tracking and pulls changes
  aicommit --pull
- For new branches:
  - With --push: Creates the remote branch and sets up tracking
  - With --pull: Skips the pull if the remote branch doesn't exist yet
  - No manual git push --set-upstream origin <branch> needed
This makes working with new branches much easier, as you don't need to manually configure upstream tracking.
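As a concrete illustration of the workflow this enables, publishing a brand-new branch can be as short as the following (the branch name is just an example):
# Create a new local branch
git checkout -b feature/my-change
# Stage everything, commit with a generated message, and push;
# aicommit sets up the upstream branch automatically
aicommit --add --push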
Watch Mode
The watch mode allows you to automatically commit changes when files are modified. This is useful for:
- Automatic backups of your work
- Maintaining a detailed history of changes
- Not forgetting to commit your changes
Basic Watch Mode
aicommit --watch # Monitor files continuously and commit on changes
Watch with Edit Delay
You can add a delay after the last edit before committing. This helps avoid creating commits while you're still actively editing files:
aicommit --watch --wait-for-edit 30s # Monitor files continuously, but wait 30s after last edit before committing
Time units for --wait-for-edit:
- s: seconds
- m: minutes
- h: hours
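For example, the delay can be expressed in any of these units (the values below are illustrative):
# Wait 45 seconds after the last edit before committing
aicommit --watch --wait-for-edit 45s
# Wait 5 minutes after the last edit before committing
aicommit --watch --wait-for-edit 5m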
Additional Options
You can combine watch mode with other flags:
# Watch with auto-push
aicommit --watch --push
# Watch with version increment
aicommit --watch --add --version-file version --version-iterate
# Interactive mode with watch
aicommit --watch --dry-run
Tips
- Use --wait-for-edit when you want to avoid partial commits
- For active editing, set longer wait times (e.g., --wait-for-edit 1m)
- For quick commits after small changes, don't use --wait-for-edit
- Use Ctrl+C to stop watching
Algorithm of Operation
Below is a flowchart diagram of the aicommit program workflow:
flowchart TD
A[Start aicommit] --> B{Check parameters}
%% Main flags processing
B -->|--help| C[Show help]
B -->|--version| D[Show version]
B -->|--add-provider| E[Add new provider]
B -->|--list| F[List providers]
B -->|--set| G[Set active provider]
B -->|--config| H[Edit configuration]
B -->|--dry-run| I[Message generation mode without commit]
B -->|standard mode| J[Standard commit mode]
B -->|--watch| K[File change monitoring mode]
B -->|--simulate-offline| Offline[Simulate offline mode]
%% Provider addition
E -->|interactive| E1[Interactive setup]
E -->|--add-openrouter| E2[Add OpenRouter]
E -->|--add-ollama| E3[Add Ollama]
E -->|--add-openai-compatible| E4[Add OpenAI compatible API]
E -->|--add-simple-free| E_Free[Add Simple Free OpenRouter]
E1 --> E5[Save configuration]
E2 --> E5
E3 --> E5
E4 --> E5
E_Free --> E5
%% Main commit process
J --> L[Load configuration]
L --> M{Versioning}
M -->|--version-iterate| M1[Update version]
M -->|--version-cargo| M2[Update in Cargo.toml]
M -->|--version-npm| M3[Update in package.json]
M -->|--version-github| M4[Create GitHub tag]
M1 --> N
M2 --> N
M3 --> N
M4 --> N
M -->|no versioning options| N[Get git diff]
%% Git operations
N -->|--add| N1[git add .]
N1 --> N_Truncate["Smart diff processing (truncate large files only)"]
N -->|only staged changes| N_Truncate["Smart diff processing (truncate large files only)"]
N_Truncate --> O["Generate commit message (using refined prompt)"]
%% Simple Free OpenRouter branch
O -->|Simple Free OpenRouter| SF1["Query OpenRouter API for available free models"]
SF1 --> SF_Network{Network available?}
SF_Network -->|Yes| SF2["Filter for free models"]
SF_Network -->|No| SF3["Use fallback predefined free models list"]
SF2 --> SF4["Sort by preferred model order"]
SF3 --> SF4
SF4 --> SF5["Select best available model"]
SF5 --> SF6["Generate commit using selected model"]
SF6 --> SF7["Display which model was used"]
SF7 --> P
%% Normal provider branch
O -->|Other providers| P{Success?}
P -->|Yes| Q[Create commit]
P -->|No| P1{Retry limit reached?}
P1 -->|Yes| P2[Generation error]
P1 -->|No| P3[Retry after 5 sec]
P3 --> O
Q --> R{Additional operations}
R -->|--pull| R1[Sync with remote repository]
R -->|--push| R2[Push changes to remote]
R1 --> S[Done]
R2 --> S
R -->|no additional options| S
%% Improved watch mode with timer reset logic
K --> K1[Initialize file monitoring system]
K1 --> K2[Start monitoring for changes]
K2 --> K3{File change detected?}
K3 -->|Yes| K4[Log change to terminal]
K3 -->|No| K2
K4 --> K5{--wait-for-edit specified?}
K5 -->|No| K7[git add changed file]
K5 -->|Yes| K6[Check if file is already in waiting list]
K6 --> K6A{File in waiting list?}
K6A -->|Yes| K6B[Reset timer for this file]
K6A -->|No| K6C[Add file to waiting list with current timestamp]
K6B --> K2
K6C --> K2
%% Parallel process for waiting list with timer reset logic
K1 --> K8[Check waiting list every second]
K8 --> K9{Any files in waiting list?}
K9 -->|No| K8
K9 -->|Yes| K10[For each file in waiting list]
K10 --> K11{Time since last modification >= wait-for-edit time?}
K11 -->|No| K8
K11 -->|Yes| K12[git add stable files]
K12 --> K13["Start commit process (includes smart diff processing & message generation)"]
K13 --> K14[Remove committed files from waiting list]
K14 --> K8
K7 --> K13
%% Dry run
I --> I1[Load configuration]
I1 --> I2[Get git diff]
I2 --> I3_Truncate["Smart diff processing (truncate large files only)"]
I3_Truncate --> I3["Generate commit message (using refined prompt)"]
I3 --> I4[Display result without creating commit]
%% Offline mode simulation
Offline --> Offline1[Skip network API calls]
Offline1 --> Offline2[Use predefined model list]
Offline2 --> J
License
This project is licensed under the MIT License - see the LICENSE file for details.