
turboCommit


A powerful CLI tool that leverages OpenAI's GPT models to generate high-quality, conventional commit messages from your staged changes.

Features

  • 🤖 Uses OpenAI's GPT models to analyze your staged changes
  • 📝 Generates conventional commit messages that follow best practices
  • 🎯 Interactive selection from multiple commit message suggestions
  • ✏️ Edit messages directly or request AI revisions
  • 🧠 Advanced reasoning mode for enhanced AI interactions
  • 🔍 Comprehensive debugging capabilities with file or stdout logging
  • ⚡ Streaming responses for real-time feedback
  • 🔄 Auto-update checks to keep you on the latest version
  • 🎨 Beautiful terminal UI with color-coded output
  • ⚙️ Configurable settings via YAML config file

Installation

cargo install turbocommit

Pro tip: Add an alias to your shell configuration for quicker access:

# Add to your .bashrc, .zshrc, etc.
alias tc='turbocommit'

Usage

  1. Stage your changes:
git add .  # or stage specific files
  2. Generate commit messages:
turbocommit  # or 'tc' if you set up the alias

After generating commit messages, you can:

  • Select your preferred message from multiple suggestions
  • Edit the message directly before committing
  • Request AI revisions with additional context or requirements
  • Commit the message once you're satisfied
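The two steps above can be sketched end to end in a throwaway repository (turbocommit itself is not invoked here, and the file name is purely illustrative; the staged diff this prints is what the tool hands to the model):

```shell
# Create a scratch repo and stage a change:
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo 'fn main() {}' > main.rs
git add main.rs

# turbocommit reads the staged diff, i.e. roughly what this shows:
git diff --cached --stat
```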

Options

  • -n <number> - Number of commit message suggestions to generate
  • -t <temperature> - Temperature for GPT model (0.0 to 2.0) (no effect in reasoning mode)
  • -f <frequency_penalty> - Frequency penalty (-2.0 to 2.0)
  • -m <model> - Specify the GPT model to use
  • -r, --enable-reasoning - Enable support for models with reasoning capabilities (like o-series)
  • --reasoning-effort <level> - Set reasoning effort for supported models (low/medium/high, default: medium)
  • -d, --debug - Show basic debug info in console
  • --debug-file <path> - Write detailed debug logs to file (use '-' for stdout)
  • --auto-commit - Automatically commit with the generated message
  • --api-key <key> - Provide API key directly
  • --api-endpoint <url> - Custom API endpoint URL
  • -p, --print-once - Disable streaming output
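As one illustration of combining these flags, the line below composes a full invocation (model name and numeric values are examples only; the command is echoed rather than executed so it can be inspected first):

```shell
# 5 suggestions, custom temperature and frequency penalty, no streaming:
echo turbocommit -n 5 -t 0.8 -f 0.3 -m gpt-4 --print-once
```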

Reasoning Mode

When using models with reasoning capabilities (such as OpenAI's o-series), this mode enables their built-in reasoning features: the model works through your staged changes with its own internal reasoning process before generating a commit message.

Example usage:

turbocommit -r -m o3-mini -n 1  # Enable reasoning mode with default effort
turbocommit -r --reasoning-effort high -m o3-mini -n 1  # Specify reasoning effort

Debugging

Debug output helps troubleshoot API interactions:

turbocommit -d  # Basic info to console
turbocommit --debug-file debug.log  # Detailed logs to file
turbocommit --debug-file -  # Detailed logs to stdout

The debug logs include:

  • Request details (model, tokens, parameters)
  • API responses and errors
  • Timing information
  • Full request/response JSON (in file mode)

Model-Specific Notes

Different models have different capabilities and limitations:

O-Series Models (e.g., o3-mini)

  • Support reasoning mode
  • Do not support the temperature or frequency penalty parameters
  • May not support multiple suggestions (-n)
  • Optimized for reasoning-heavy tasks

Standard GPT Models

  • Support all parameters
  • Can generate multiple suggestions (-n)
  • Support temperature and frequency penalty tuning
  • No dedicated reasoning mode

For more options, run:

turbocommit --help

Configuration

turboCommit creates a config file at ~/.turbocommit.yaml on first run. You can customize:

  • Default model
  • API endpoint
  • Temperature and frequency penalty
  • Number of suggestions
  • System message prompt
  • Auto-update checks
  • Reasoning mode defaults
  • And more!

Example configuration:

model: "gpt-4"
default_temperature: 1.0
default_frequency_penalty: 0.0
default_number_of_choices: 3
enable_reasoning: true
reasoning_effort: "medium"
disable_print_as_stream: false
disable_auto_update_check: false
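The settings above are plain YAML scalars, so the config is easy to inspect or patch with standard tools. A minimal sketch (writing a fragment of the example to a temporary path rather than touching a real ~/.turbocommit.yaml):

```shell
# Write an example config fragment to a scratch file:
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
model: "gpt-4"
default_number_of_choices: 3
EOF

# Read a single setting back out:
grep '^model:' "$cfg"
```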

Contributing

Contributions are welcome! Feel free to open issues and pull requests.

License

Licensed under MIT - see the LICENSE file for details.
