Blitzdenk - Multi API AI Tui

License: Apache 2.0 Crate

A minimal, concise auto-context project chat bot. A replacement for dying search.

blitz.webm

Using basic CLI tools to quickly find information relevant to your question.

(ripgrep, tree, cat, etc ... )
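The context-gathering idea can be sketched as a thin wrapper around such tools. Below is a minimal, hypothetical Rust sketch (`run_tool` is an illustrative name, not blitzdenk's actual API):

```rust
use std::process::Command;

// Run an external CLI tool and capture its stdout, if it succeeds.
// Illustrative only; the real context step may differ.
fn run_tool(tool: &str, args: &[&str]) -> Option<String> {
    Command::new(tool)
        .args(args)
        .output()
        .ok()
        .filter(|out| out.status.success())
        .map(|out| String::from_utf8_lossy(&out.stdout).into_owned())
}

fn main() {
    // e.g. list the project tree two levels deep, if `tree` is installed
    if let Some(listing) = run_tool("tree", &["-L", "2"]) {
        println!("{listing}");
    }
}
```

Returning `None` on failure lets the agent fall back to another tool instead of aborting.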

Install

clone + make install will build the binary and copy it to ~/.local/bin

or

cargo install blitzdenk

Dependencies

The following Linux CLI tools are required and must be installed:

  • rg (ripgrep)
  • tree

Features

  • can navigate and read your project.
  • can read and write to local project memory ('memo.md' in cwd).
  • can crawl links and read docs. (drop links in 'memo.md' or chat).
  • can read git logs.
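The project-memory feature boils down to appending notes to memo.md in the working directory. A hedged sketch of that idea (`remember` is a made-up name, not blitzdenk's actual implementation):

```rust
use std::fs::OpenOptions;
use std::io::Write;

// Append a bullet note to a project-memory file such as memo.md.
// Illustrative only; the real tool also reads this file back into context.
fn remember(path: &str, note: &str) -> std::io::Result<()> {
    let mut file = OpenOptions::new().create(true).append(true).open(path)?;
    writeln!(file, "- {note}")
}

fn main() -> std::io::Result<()> {
    remember("memo.md", "auth middleware lives in src/middleware/auth.rs")
}
```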

Configure

Use the config command to save API keys and model names.

blitzdenk config

Default config file is saved at: ~/.cache/blitzdenk/config.

Use

Basic chat in the cwd. Optionally, you can pass a path to the desired working directory.

#openai
blitzdenk chat openai

#ollama
blitzdenk chat ollama ./path/to/project

Yolo mode

Same as chat, but with access to edit and delete files with no backups (rm, mv, mkdir, sed). Will destroy your project. Might code a turd.

yolo.webm

It's like cursor, but less safe.

blitzdenk yolo openai

Currently Supports

Any model. Might fail on some.

  • OpenAI (GPT-4.1, best so far)
  • Ollama (Qwen3, pretty good)

Neovim

It's a simple, borderless TUI. Perfect for use in a Neovim terminal buffer.

vim.keymap.set("n", "<leader>o", ":vsplit term://blitzdenk chat openai<CR>:startinsert<CR>", {})

The AI pipeline approach

Agents running in a loop tend to explode small lies into big ones after n iterations. So instead of looping, the best way to get good results is a forward pipeline:

Question -> collect context -> answer -> correction. Restart

Conclusion: Restart chats often. 1 question/task per chat.
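The pipeline above can be sketched as plain function composition rather than a loop (a toy sketch; the stage functions are stand-ins, not blitzdenk internals):

```rust
// Each stage runs exactly once and feeds the next; no agent loop.
fn collect_context(question: &str) -> String {
    // Would shell out to rg/tree/cat in the real tool.
    format!("context({question})")
}

fn answer(question: &str, context: &str) -> String {
    format!("answer({question}, {context})")
}

fn correct(draft: &str) -> String {
    // Single correction pass; afterwards the chat restarts fresh.
    format!("reviewed({draft})")
}

fn pipeline(question: &str) -> String {
    let ctx = collect_context(question);
    let draft = answer(question, &ctx);
    correct(&draft)
}

fn main() {
    println!("{}", pipeline("where is the config parsed?"));
}
```

Because no stage feeds back into an earlier one, an error can only pass through once instead of compounding.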
