Blitzdenk - Multi API AI Tui

License: Apache 2.0

A minimal, concise auto-context project chat bot. A replacement for dying search.

blitz.webm

It uses basic CLI tools (ripgrep, tree, cat, etc.) to quickly find information relevant to your question.

Install

Clone the repo and run make install; this builds the binary and copies it to ~/.local/bin.

or

cargo install blitzdenk

Dependencies

The following Linux CLI tools are required and must be installed:

  • rg (ripgrep)
  • tree
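To confirm up front that both tools are on your PATH, a quick shell check like the following works (this is just a convenience snippet, not part of blitzdenk itself):

```shell
# Check that the required CLI tools are on PATH (convenience check only)
for tool in rg tree; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing - install it via your package manager"
  fi
done
```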

Features

  • can navigate and read your project.
  • can read and write to local project memory ('memo.md' in cwd).
  • can crawl links and read docs. (drop links in 'memo.md' or chat).
  • can read git logs.

Configure

Use the config command to save API keys and models.

blitzdenk config

Default config file is saved at: ~/.cache/blitzdenk/config.
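The config file format isn't documented here, but since it lives at a fixed path you can inspect it directly; a small sketch (the path is taken from above, the contents are whatever `blitzdenk config` wrote):

```shell
# Print the saved config if it exists; otherwise hint at the setup command
CONFIG="$HOME/.cache/blitzdenk/config"
if [ -f "$CONFIG" ]; then
  cat "$CONFIG"
else
  echo "no config yet - run: blitzdenk config"
fi
```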

Use

Basic chat runs in the current working directory. Optionally, you can pass a path to the desired working directory.

#openai
blitzdenk chat openai

#ollama
blitzdenk chat ollama ./path/to/project

#gemini
blitzdenk chat gemini

Yolo mode

Same as chat, but it does not ask for permission when mutating the project.

yolo.webm

It's like cursor, but less safe.

blitzdenk yolo openai

Currently Supports

Any model should work, though some might fail.

  • OpenAI (gpt-4.1, best so far)
  • Ollama (qwen3, pretty good)
  • Gemini

Neovim

It's a simple no-border TUI. Perfect for use in a Neovim terminal buffer.

vim.keymap.set("n", "<leader>o", ":vsplit term://blitzdenk chat openai<CR>:startinsert<CR>", {})

The AI pipeline approach

Agents running in a loop tend to compound small lies into big ones after n iterations. So instead of looping, the best way to get good results is a forward pipeline:

Question -> collect context -> answer -> correction -> restart.

Conclusion: Restart chats often. 1 question/task per chat.
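Following that advice, a tiny wrapper can make the restart habit explicit. This is a hypothetical helper (the `ask` name is mine, not part of blitzdenk); it only uses the `blitzdenk chat <api> [path]` form shown above, starting a fresh chat per call:

```shell
# Hypothetical wrapper: one question/task per chat session, then restart fresh.
# Assumes `blitzdenk` is on PATH; defaults to the current directory.
ask() {
  blitzdenk chat openai "${1:-.}"
}
```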
